Month: July 2015
REMEMBER THE OPENING scene of the first Fast and Furious film, when bandits hijacked a truck to steal its cargo? Or consider the recent real-life theft of $4 million in gold from a truck traveling from Miami to Massachusetts. Heists like these could become easier to pull off thanks to security flaws in systems used for tracking valuable shipments and assets.
Vulnerabilities in asset-tracking systems made by Globalstar and its subsidiaries would allow a hijacker to track valuable and sensitive cargo—such as electronics, gas and volatile chemicals, military supplies or possibly even nuclear materials—disable the location-tracking device used to monitor it, then spoof the coordinates to make it appear as if a hijacked shipment were still traveling its intended route. Or a hacker who just wanted to cause chaos and confusion could feed false coordinates to companies and militaries monitoring their assets and shipments to make them think they’d been hijacked, according to Colby Moore, a researcher with the security firm Synack, who plans to discuss the vulnerabilities next week at the Black Hat and Def Con security conferences in Las Vegas.
The same vulnerable technology isn’t used just for tracking cargo and assets, however. It’s also used in people-tracking systems for search-and-rescue missions and in SCADA environments to monitor high-tech engineering projects like pipelines and oil rigs to determine, for example, if valves are open or closed in areas where phone, cellular and Internet service don’t exist. Hackers could exploit the same vulnerabilities to interfere with these systems as well, Moore says.
The tracking systems consist of devices about the size of a hand that are attached to a shipping container, vehicle or equipment and communicate with Globalstar’s low-earth orbiting satellites by sending them latitude and longitude coordinates or, in the case of SCADA systems, information about their operation. A 2003 article about the technology, for example, indicated that the asset trackers could be configured to monitor and trigger an alert when certain events occurred, such as the temperature rising above a safe level in a container or the lock on a container being opened. The satellites relay this information to ground stations, which in turn transmit the data via the Internet or phone networks to the customer’s computers.
According to Moore, the Simplex data network that Globalstar uses for its satellites doesn’t encrypt communication between the tracking devices, orbiting satellites and ground stations, nor does it require the communication be authenticated so that only legitimate data gets sent. As a result, someone can intercept the communication, spoof it or jam it.
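To make the risk concrete: a telemetry frame with no encryption, signature, or message authentication code can be forged by anyone who knows its layout. The sketch below is purely illustrative; the field layout, device ID, and microdegree scaling are invented for demonstration and are not Globalstar's actual Simplex format.

```python
import struct

def encode_report(device_id: int, lat: float, lon: float) -> bytes:
    # Pack an ID plus microdegree coordinates. With no signature or MAC,
    # nothing distinguishes a forged frame from a genuine one.
    return struct.pack(">Iii", device_id, round(lat * 1e6), round(lon * 1e6))

def decode_report(frame: bytes):
    device_id, lat_u, lon_u = struct.unpack(">Iii", frame)
    return device_id, lat_u / 1e6, lon_u / 1e6

# Forge a report claiming a hijacked truck is still in Miami.
spoofed = encode_report(0x00C0FFEE, 25.7617, -80.1918)
```

A ground station that accepts any well-formed frame would process this forged report exactly like a real one, which is the core of the spoofing attack Moore describes.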
“The integrity of the whole system is relying on a hacker not being able to clone or tamper with a device,” says Moore. “The way Globalstar engineered the platform leaves security up to the end integrator, and so far, no one has implemented security.”
Simplex data transmissions are also one-way from device to satellite to ground station, which means there is no way to ping back to a device to verify that the data transmitted was accurate if the device has only satellite capability (some of the more expensive Globalstar tracking devices combine satellite and cell network communication for communicating in areas where network coverage is available).
Moore says he notified Globalstar about the vulnerabilities about six months ago, but the company was noncommittal about fixing them. The problems, in fact, cannot be fixed with simple software patches. Instead, to add encryption and authentication, the communication protocol would have to be re-architected.
Globalstar did not respond to a request from WIRED for comment.
Top Companies Rely on Globalstar Satellites
Globalstar has more than four dozen satellites in space, and it’s considered one of the largest providers of satellite voice and data communications in the world. Additionally, its satellite asset-tracking systems—such as the SmartOne, SmartOne B and SmartOne C—provide service to a wide swath of industry, including oil and gas, mining, forestry, commercial fishing, utilities, and the military. Asset-tracking systems made by Globalstar and its subsidiaries Geforce and Axon can be used to track fleets of armored cars, cargo-shipping containers, maritime vessels, and military equipment, or simply expensive construction equipment. Geforce’s customers include such bigwigs as BP, Halliburton, GE Oil and Gas, Chevron and ConocoPhillips. Geforce markets its trackers for use with things like acid and fuel tanks, railway cars, and so-called “frac tanks” used in fracking operations.
The company noted in a press release this year that since the launch of its initial SmartOne asset-tracking system in 2012, more than 150,000 units have been put to use in multiple industries, including aviation, alternative energy and the military.
In addition to asset-tracking, Globalstar produces a personal tracking system known as the SPOT Satellite Messenger for hikers, sailors, pilots and others who travel in remote areas where cell coverage might not be available so that emergency service personnel can find them if they become lost or separated from their vehicle.
Moore tested three Globalstar devices that he bought for tracking assets and people, but he says all systems that communicate with the Globalstar satellites use the same Simplex protocol and would therefore be vulnerable to interference. He also thinks the problem may not be unique to Globalstar trackers. “I would expect to see similar vulnerabilities in other systems if we were to look at them further,” he says.
The Simplex network uses a secret code to encode all data sent through it, but Moore was able to easily reverse-engineer it to determine how messages get encoded in order to craft his own. “The secret codes are not generated on the fly and are not unique. Instead, the same code is used for all the devices,” he says.
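A fixed scrambling sequence shared by every device behaves like a repeating XOR key, and a single known plaintext/ciphertext pair breaks it for all devices at once. This sketch uses an invented 4-byte code to illustrate the class of weakness Moore describes; it is not the actual Simplex encoding.

```python
SHARED_CODE = bytes.fromhex("a5c3f00d")  # invented 4-byte scrambling sequence

def scramble(data: bytes) -> bytes:
    # XOR with a fixed, device-independent code: obfuscation, not encryption.
    return bytes(b ^ SHARED_CODE[i % len(SHARED_CODE)] for i, b in enumerate(data))

# One known plaintext/ciphertext pair recovers the code for every device:
known_plain = b"LAT=25.76"
captured = scramble(known_plain)          # what an eavesdropper intercepts
recovered = bytes(p ^ c for p, c in zip(known_plain, captured))
```

Because the code is never varied per device or per message, recovering it once lets an attacker both decode all intercepted traffic and craft messages of their own.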
Moore spent about $1,000 in hardware to build a transceiver to intercept data from the tracking devices he purchased, and an additional $300 in software and hardware for analyzing the data and mimicking a tracking device. Although he built his own transceiver, thieves would really only need a proper antenna and a universal software radio peripheral. With these, they could intercept satellite signals to identify a shipment of valuable cargo, track its movement and transmit spoofed data. While seizing the goods, they could disable the vehicle’s tracking device physically or jam the signals while sending spoofed location data from a laptop to make it appear that the vehicle or shipment was traveling in one location when it’s actually in another.
Each device has a unique ID that’s printed on its outer casing. The devices also transmit their unique ID when communicating with satellites, so an attacker targeting a specific shipment could intercept and spoof the communication.
In most cases, attackers would want to know in advance, before hijacking a truck or shipment, what’s being transported. But an attacker could also just set up a receiver in an area where valuable shipments are expected to pass and track the assets as they move.
“I put this on a tower on a large building and all the locations of devices [in the area] are being monitored,” Moore says. “Can I find a diamond shipment or a nuclear shipment that it can track?”
It’s unclear how the military is using Globalstar’s asset-tracking devices, but conceivably if they’re being used in war zones, the vulnerabilities Moore uncovered could be used by adversaries to track supplies and convoys and aim missiles at them.
Often the unique IDs on devices are sequential, so if a commercial or military customer owns numerous devices for tracking assets, an attacker would be able to determine other device IDs, and assets, that belong to the same company or military based on similar ID numbers.
Moore says security problems like this are endemic when technologies that were designed years ago, when security protocols were lax, haven’t been re-architected to account for today’s threats.
“We rely on these systems that were architected long ago with no security in mind, and these bugs persist for years and years,” he says. “We need to be very mindful in designing satellite systems and critical infrastructure, otherwise we’re going to be stuck with these broken systems for years to come.”
PUT A COMPUTER on a sniper rifle, and it can turn the most amateur shooter into a world-class marksman. But add a wireless connection to that computer-aided weapon, and you may find that your smart gun suddenly seems to have a mind of its own—and a very different idea of the target.
At the Black Hat hacker conference in two weeks, security researchers Runa Sandvik and Michael Auger plan to present the results of a year of work hacking a pair of $13,000 TrackingPoint self-aiming rifles. The married hacker couple have developed a set of techniques that could allow an attacker to compromise the rifle via its Wi-Fi connection and exploit vulnerabilities in its software. Their tricks can change variables in the scope’s calculations that make the rifle inexplicably miss its target, permanently disable the scope’s computer, or even prevent the gun from firing. In a demonstration for WIRED (shown in the video above), the researchers were able to dial in their changes to the scope’s targeting system so precisely that they could cause a bullet to hit a bullseye of the hacker’s choosing rather than the one chosen by the shooter.
“You can make it lie constantly to the user so they’ll always miss their shot,” says Sandvik, a former developer for the anonymity software Tor. Or the attacker can just as easily lock out the user or erase the gun’s entire file system. “If the scope is bricked, you have a six to seven thousand dollar computer you can’t use on top of a rifle that you still have to aim yourself.”
Since TrackingPoint launched in 2011, the company has sold more than a thousand of its high-end, Linux-powered rifles with a self-aiming system. The scope allows you to designate a target and dial in variables like wind, temperature, and the weight of the ammunition being fired. Then, after the trigger is pulled, the computerized rifle itself chooses the exact moment to fire, activating its firing pin only when its barrel is perfectly oriented to hit the target. The result is a weapon that can allow even a gun novice to reliably hit targets from as far as a mile away.
But Sandvik and Auger found that they could use a chain of vulnerabilities in the rifle’s software to take control of those self-aiming functions. The first of these has to do with the Wi-Fi, which is off by default, but can be enabled so you can do things like stream a video of your shot to a laptop or iPad. When the Wi-Fi is on, the gun’s network has a default password that allows anyone within Wi-Fi range to connect to it. From there, a hacker can treat the gun as a server and access APIs to alter key variables in its targeting application. (The hacker pair were only able to find those changeable variables by dissecting one of the two rifles they worked with, using an eMMC reader to copy data from the computer’s flash storage with wires they clipped onto its circuit board pins.)
In the video demonstration for WIRED at a West Virginia firing range, Auger first took a shot with the unaltered rifle and, using the TrackingPoint rifle’s aiming mechanism, hit a bullseye on his first attempt. Then, with a laptop connected to the rifle via Wi-Fi, Sandvik invisibly altered the variable in the rifle’s ballistic calculations that accounted for the ammunition’s weight, changing it from around .4 ounces to a ludicrous 72 pounds. “You can set it to whatever crazy value you want and it will happily accept it,” says Sandvik.
Sandvik and Auger haven’t figured out why, but they’ve observed that higher ammunition weights aim a shot to the left, while lower or negative values aim it to the right. So on Auger’s next shot, Sandvik’s change of that single number in the rifle’s software made the bullet fly 2.5 feet to the left, bullseyeing an entirely different target.
The only alert a shooter might have to that hack would be a sudden jump in the scope’s view as it shifts position. But that change in view is almost indistinguishable from jostling the rifle. “Depending on how good a shooter you are, you might chalk that up to ‘I bumped it,’” says Sandvik.
The two hackers’ wireless control of the rifle doesn’t end there. Sandvik and Auger found that through the Wi-Fi connection, an attacker could also add themselves as a “root” user on the device, taking full control of its software, making permanent changes to its targeting variables, or deleting files to render the scope inoperable. If a user has set a PIN to limit other users’ access to the gun, that root attack can nonetheless gain full access and lock out the gun’s owner with a new PIN. The attacker can even disable the firing pin, a computer-controlled solenoid, to prevent the gun from firing.
One thing their attack can’t do, the two researchers point out, is cause the gun to fire unexpectedly. Thankfully TrackingPoint rifles are designed not to fire unless the trigger is manually pulled.
In a phone call with WIRED, TrackingPoint founder John McHale said that he appreciates Sandvik and Auger’s research, and that the company will work with them to develop a software update to patch the rifle’s hackable flaws as quickly as possible. When it’s ready, that update will be mailed out to customers as a USB drive, he said. But he argued that the software vulnerabilities don’t fundamentally change the gun’s safety. “The shooter’s got to pull the rifle’s trigger, and the shooter is responsible for making sure it’s pointed in a safe direction. It’s my responsibility to make sure my scope is pointed where my gun is pointing,” McHale says. “The fundamentals of shooting don’t change even if the gun is hacked.”
He also pointed out that the Wi-Fi range of the hack would limit its real-world use. “It’s highly unlikely when a hunter is on a ranch in Texas, or on the plains of the Serengeti in Africa, that there’s a Wi-Fi internet connection,” he says. “The probability of someone hiding nearby in the bush in Tanzania is very low.”
But Auger and Sandvik counter that with their attack, a hacker could alter the rifle in a way that would persist long after that Wi-Fi connection is broken. It’s even possible (although likely difficult), they suggest, to implant the gun with malware that would only take effect at a certain time or location based on querying a user’s connected phone.
In fact, Auger and Sandvik have been attempting to contact TrackingPoint to help the company patch its rifles’ security flaws for months, emailing the company without response. The company’s silence until WIRED’s inquiry may be due to its financial problems: Over the last year, TrackingPoint has laid off the majority of its staff, switched CEOs and even ceased to take new orders for rifles. McHale insists that the company hasn’t gone out of business, though it’s “working through an internal restructuring.”
Given TrackingPoint’s financial straits, Sandvik and Auger say they won’t release the full code for their exploit for fear that the company won’t have the manpower to fix its software. And with only a thousand vulnerable rifles in consumers’ hands and the hack’s limited range, it may be unlikely that anyone will actually be victimized by the attack.
But the rifles’ flaws signal a future where objects of all kinds are increasingly connected to the Internet and are vulnerable to hackers—including lethal weapons. “There are so many things with the Internet attached to them: cars, fridges, coffee machines, and now guns,” says Sandvik. “There’s a message here for TrackingPoint and other companies…when you put technology on items that haven’t had it before, you run into security challenges you haven’t thought about before.”
**This article is for educational purposes only**
Critical infrastructure such as oil platforms and nuclear reactors has sophisticated security in place to protect against cyberattacks. Even so, hackers think a step ahead of security professionals when attacking critical infrastructure. Because critical infrastructure runs on isolated networks, it is very difficult to reach from the outside world. For this reason, hackers have developed malware such as Stuxnet and Flame that spreads through USB devices, since those networks exchange large amounts of information via USB memory sticks.
USB sticks are reusable storage devices that plug into a computer's USB port and are commonly known as flash drives or memory sticks. You can erase a USB drive any number of times and use it for different purposes.
USB sticks are so common these days that hackers have begun writing malware specifically for them. Using such malware, hackers are able to break into isolated networks like those in nuclear plants. In this article we discuss USB-related malware with the help of information security experts.
USB DRIVE DESIGN
A USB flash drive is a data storage device that combines flash memory with an integrated Universal Serial Bus (USB) interface. A flash drive consists of a small printed circuit board carrying the circuit elements and a USB connector, electrically insulated and protected inside a plastic, metal or rubber case. Most flash drives use a standard USB Type-A connector that plugs into a port on a computer, but drives with other interfaces also exist. USB flash drives draw power from the computer through the USB connection.
The parts of a flash drive are listed below:
- Standard-A USB connector – provides a physical interface to the host computer.
- USB mass storage controller – a small microcontroller with a small amount of on-chip ROM and RAM.
- NAND flash memory chip(s) – stores the data (NAND flash is also typically used in digital cameras).
- Crystal oscillator – produces the device's main 12 MHz clock signal and controls the device's data output through a phase-locked loop.
- Cover – typically made of plastic or metal, protecting the electronics against mechanical stress and even possible short circuits.
- Jumpers and test pins – used for testing during manufacturing or for loading the flash drive's firmware onto the microcontroller.
- LEDs – indicate data transfers.
- Write-protect switches – enable or disable writing of data to memory.
- Unpopulated space – provides room for a second memory chip, letting the manufacturer use a single printed circuit board for more than one storage capacity.
- Some drives offer expandable storage via an internal memory card slot, much like a memory card reader.
Most flash drives come preformatted with the FAT32 or exFAT file system. Sectors are 512 bytes long, for compatibility with hard disk drives, and the first sector can contain a master boot record and a partition table.
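That first-sector layout is easy to inspect programmatically. The sketch below parses the classic MBR structure (446 bytes of boot code, four 16-byte partition entries, and the 0x55AA signature); the sample sector it builds is fabricated for illustration.

```python
import struct

def parse_mbr(sector: bytes):
    """Parse a 512-byte first sector: 446 bytes of boot code,
    four 16-byte partition entries, and the 0x55AA signature."""
    if len(sector) != 512 or sector[510:] != b"\x55\xaa":
        raise ValueError("not a valid MBR sector")
    partitions = []
    for i in range(4):
        entry = sector[446 + 16 * i : 446 + 16 * (i + 1)]
        ptype = entry[4]                                   # partition type byte
        lba_start, sectors = struct.unpack("<II", entry[8:16])
        if ptype:                                          # 0x00 marks an unused slot
            partitions.append({"type": ptype, "lba_start": lba_start,
                               "sectors": sectors})
    return partitions

# Build a minimal sector with one FAT32 (type 0x0C) partition for illustration.
sector = bytearray(512)
sector[446 + 4] = 0x0C
sector[446 + 8 : 446 + 16] = struct.pack("<II", 2048, 1024000)
sector[510:] = b"\x55\xaa"
parts = parse_mbr(bytes(sector))
```

Reading the partition table this way is also how forensic tools spot a drive whose reported capacity does not match its partition layout.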
There are two types of USB malware: the first is USB drive firmware malware, and the second is ordinary computer malware that executes only from USB drives, known as Ghost malware. We will cover each in more detail, along with how hackers use them to break into the isolated networks of critical infrastructure such as electric power plants and nuclear reactors.
1. Malware based on the USB microcontroller firmware
Hackers create this malware by reprogramming the firmware of the USB drive's mass storage controller. The malware is injected into the firmware, which lives on the microcontroller rather than in the flash memory (where we store our files).
Mike Stevens, an information security training expert, explains that once malware is injected into a USB drive's firmware, it can do the following:
- Microcontroller firmware malware can emulate a keyboard and issue commands on behalf of the logged-in user, for example giving the hacker root access and infecting other devices on the network.
- The USB drive can act as a network card and change the computer's DNS settings to redirect traffic.
The trust that operating systems such as Windows, Mac and Linux place in human interface devices (HIDs) such as keyboards and network cards is the reason this attack works. The malware's activities appear as if a logged-in user performed them. The operating system detects the USB drive with malicious firmware as an HID, and the malware runs a command sequence that hands root control to the hacker. Antivirus software cannot detect this kind of threat, because it believes a logged-in user granted access to another trusted party.
There are three different types of attacks based on the firmware of the USB mass storage controller.
As the security training expert explained, the attacker takes a normal USB drive containing a small microcontroller, injects malware into its firmware, and uses that malware to take root control of the computer. This kind of USB drive is called a BadUSB.
Types of attacks with BadUSB
- Pretend to be a 4 GB drive while actually holding 32 GB, using the remaining space to copy data and later upload it to a remote server. When the drive is formatted, only the 4 GB of visible space is erased.
- Pretend to be a keyboard or mouse.
- Pretend to be a network card.
- Pretend to be a phone or tablet.
- Pretend to be a webcam.
- Pretend to be a bank authentication token.
- Pretend to be a printer or scanner.
- Pretend to be a Type-C power-and-data connector for the new MacBook or Chromebook Pixel. Despite its versatility, Type-C is still based on the USB standard, which leaves it vulnerable to a firmware attack; this would be an attack through the charging cable itself.
HOW TO CREATE A BADUSB
STEP 1. Check the microcontroller details
First, check the details of the controller and its associated firmware. Software such as ChipGenius, CheckUDisk, UsbIDCheck or USBDeview can determine this. These programs are open source and readily available. They will report the chip vendor, part number, product vendor, product model, VID and PID.
STEP 2. Restore the original firmware and verify it (optional step)
You can also use this step to repair your USB drive if for some reason it is dead. Visit a site such as flashboot.ru and look for the restore program.
Use the VID and PID found in the previous step to search for the firmware restore program. Download the MP (mass production) tool, such as the USBest UT16 tool, matching your PID and VID, and then update the controller. According to information security experts, this will restore your USB drive to like-new condition.
STEP 3. Preparing to inject malware into the firmware
We will cover the scenario of Toshiba USB sticks that use a Phison controller. The necessary tools are available on GitHub.
- You need a Windows machine with .NET 4.0 and Visual Studio 2012 installed.
- Install the SDCC (Small Device C Compiler) suite in C:\Program Files\SDCC (used to build the firmware and patches), and restart the computer after installing.
- Double-click DriveCom.sln to open it in Visual Studio, then build the project. DriveCom.exe will be placed in the tools folder.
- Do the same with EmbedPayload.sln and with the injector.
- Run DriveCom as follows to get information about the drive:
DriveCom.exe /drive=E /action=GetInfo
where E is the drive letter. This should tell you the type of controller you have (such as PS2251-03 (2303)) and the unique ID of your flash chip.
STEP 4. Before flashing the firmware
Flashing requires burner images. Burner image filenames normally follow a convention in which xx is the controller version (for example, 03 for PS2251-03 (2303)), yyy is the version number (irrelevant), and z indicates the page size.
z can be:
2KM – indicates the image is for 2K NAND chips.
4KM – indicates the image is for 4K NAND chips.
M – indicates the image is for 8K NAND chips.
You can download burner images from the Internet, from websites such as usbdev.ru.
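The Psychson tooling this walkthrough is based on names burner images in the form BNxxVyyyz.BIN; a small parser makes the convention above concrete. The exact pattern and the example filename are assumptions taken from that project's documentation, so verify them against the images you actually download.

```python
import re

# Assumed pattern: BN + controller version (xx) + V + version number (yyy)
# + page-size tag (z), per the Psychson project's naming convention.
BURNER_RE = re.compile(r"^BN(\d{2})V(\d{3})(2KM|4KM|M)\.BIN$", re.IGNORECASE)
PAGE_SIZE = {"2KM": "2K NAND", "4KM": "4K NAND", "M": "8K NAND"}

def parse_burner_name(name: str):
    m = BURNER_RE.match(name)
    if not m:
        return None
    controller, version, z = m.groups()
    return {"controller": controller, "version": version,
            "page_size": PAGE_SIZE[z.upper()]}

info = parse_burner_name("BN03V104M.BIN")  # hypothetical example filename
```

Checking the controller version and page-size tag before flashing matters: a burner image built for the wrong controller or NAND page size can brick the drive.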
To build the custom firmware, open a command prompt in the “firmware” directory and run build.bat. You can try FW03FF01V10353M.BIN as version 1.03.53.
The resulting file will be firmware\bin\fw.bin, which you can then flash onto your USB drive.
It will also produce a file firmware\bin\bn.bin, which is the burner-image equivalent of the code.
STEP 5. Load the firmware
Once you have the image, enter boot mode by running:
DriveCom.exe /drive=E /action=SetBootMode
where E is the drive letter. You can transfer and execute the burner image via:
DriveCom.exe /drive=E /action=SendExecutable /burner=[burner]
where E is the drive letter and [burner] is the filename of the burner image.
You can dump the drive's current firmware by running:
DriveCom.exe /drive=E /action=DumpFirmware /firmware=[firmware]
where E is the drive letter and [firmware] is the name of the destination file.
STEP 6. Inject the malware into the firmware
Here you will need an exploit payload. According to an ethical hacking training professor at IICS, you can learn to create an exploit payload and inject it into code during ethical hacking training. Alternatively, you can get a script from the Rubber Ducky GitHub page and, with the help of Duckencoder, create an inject.bin file from your script.
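For reference, a Rubber Ducky script is plain text using keystroke commands such as DELAY, GUI, STRING and ENTER. The sketch below writes a minimal, benign example payload to a file; the notepad payload, the file names, and the encoder invocation shown in the comment are illustrative, so check them against the Rubber Ducky project's own documentation.

```python
# A minimal Rubber Ducky script: open the Windows Run dialog and launch Notepad.
DUCKY_SCRIPT = """\
DELAY 1000
GUI r
DELAY 200
STRING notepad.exe
ENTER
"""

# Save the script so it can be compiled to inject.bin with Duckencoder,
# typically something like: java -jar duckencoder.jar -i payload.txt -o inject.bin
with open("payload.txt", "w") as f:
    f.write(DUCKY_SCRIPT)
```

The resulting inject.bin is what EmbedPayload embeds into the firmware image in the next step.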
You can inject the payload into the firmware by running:
EmbedPayload.exe inject.bin fw.bin
where inject.bin is your compiled Rubber Ducky script and fw.bin is the custom firmware image.
STEP 7. Flash the USB drive controller firmware
Once you have the burner image and the firmware, run:
DriveCom.exe /drive=[letter] /action=SendFirmware /burner=[burner] /firmware=[firmware]
where [letter] is the drive letter, [burner] is the name of the burner image, and [firmware] is the name of the firmware image.
The steps above describe a method for creating a BadUSB, and such a drive can be used for ethical hacking and penetration testing. You can also create SD cards as BadSD cards and use them to hack phones and tablets. Below is a video from information security researchers showing how to modify an SD card's firmware and inject malware into the card.
1.2 USB Rubber Ducky – UKI (USB Key Injector)
Instead of creating your own USB firmware, you can buy ready-made devices such as the USB Rubber Ducky or the UKI (USB Key Injector). You can learn more about the USB Key Injector and the USB Rubber Ducky in the information security training of the International Institute of Cyber Security.
1.3 Teensy Microcontroller Board
Using a Teensy microcontroller board with various kinds of software to imitate HID devices is the most traditional method. You can learn more about Teensy in ethical hacking training.
2. GHOST USB Malware
This is like ordinary malware, except that it executes only from USB devices; while inside a computer it performs no activity. Criminals use these methods to compromise isolated networks that cannot be reached through the Internet. A recently discovered malware of this type was Flame. In Flame's case, the malware created a folder that a Windows PC could not see, hiding both the malware and the documents stolen from the user, say information security experts. This opened the possibility of people unknowingly carrying Flame from PC to PC. USB drives with Ghost malware are effective on isolated networks holding large amounts of confidential information, because portable storage drives are normally used to transfer data between computers on such networks.
Flame can spread to other systems over a local network (LAN) or via a USB stick. It can record audio, screenshots, keyboard activity and network traffic. The program also records Skype conversations and can turn infected computers into Bluetooth transmitters that attempt to download information from nearby Bluetooth-enabled devices. This data, along with locally stored documents, is sent to one of several command-and-control servers run by the hackers, after which the malware can take new instructions from those servers.
Preventive measures
How to protect yourself from BadUSB and USB Rubber Ducky-style devices
According to Taylor Reed of iicybersecurity, an information security expert for nuclear plants, you can take the following steps:
- Connect only USB devices from vendors you know and trust. For critical infrastructure such as nuclear plants and oil platforms, use devices whose firmware is signed and secured by the vendor, so that if someone tries to tamper with the firmware the device stops working.
- Keep your anti-malware software up to date. It will not scan the firmware, but it should detect a BadUSB that tries to install or run malware.
- Deploy security solutions in advance that monitor the devices connected to your computer, so that any additional USB keyboard is blocked.
How to protect yourself from GHOST USB malware
- Keep your anti-malware software up to date.
- Use the Ghost USB honeypot. Ghost is a honeypot for detecting malware that spreads through USB devices.
- The honeypot currently supports Windows XP and Windows 7. Ghost works by first emulating a USB flash drive. If malware identifies it as a USB flash drive, it is tricked into infecting it. Ghost then watches for write requests to the drive, which are an indication of malware. You can learn more about the Ghost USB honeypot in ethical hacking training.
USB malware is very dangerous, and immediate measures should be taken to secure IT infrastructure with the help of information security experts.
The US Census Bureau Director, John H. Thompson, revealed on Friday that his institution experienced a data breach the past week, but no sensitive or private information was leaked.
In a post on the bureau’s blog penned by Mr. Thompson himself, he explained how attackers got access to an external-facing database belonging to the Federal Audit Clearinghouse.
This database contained the names of people submitting information to the US Census Bureau, organization addresses, phone numbers, usernames, and other types of data the bureau did not consider confidential.
Regarding private information collected from US citizens and businesses, Mr. Thompson said, “That information remains safe, secure and on an internal network segmented apart from the external site and the affected database. Over the last three days, we have seen no indication that there was any access to internal systems.”
The group Anonymous Operations is to blame for the attack
The breach was announced on Twitter by a hacker group calling itself Anonymous Operations and was carried out in protest against the TTIP (Transatlantic Trade and Investment Partnership) and TPP (Trans-Pacific Partnership) trade agreements.
The tweet also contained a link to their own website, where four other URLs linked to the info obtained in the data breach.
The nationality of the hackers is unknown, but their anger at the TPP and TTIP agreements should narrow down the search.
While not as severe as other attacks on US government bodies, the breach prompted the bureau's IT staff to take the servers offline within 90 minutes of learning of the attack, and they will remain offline until the investigation is complete.
From initial findings, “it appears the database was compromised through a configuration setting that allowed the attacker to gain access to the four files posted to the hacker’s site,” said Mr. Thompson.
THE MOST SENSITIVE work environments, like nuclear power plants, demand the strictest security. Usually this is achieved by air-gapping computers from the Internet and preventing workers from inserting USB sticks into computers. When the work is classified or involves sensitive trade secrets, companies often also institute strict rules against bringing smartphones into the workspace, as these could easily be turned into unwitting listening devices.
But researchers in Israel have devised a new method for stealing data that bypasses all of these protections—using the GSM network, electromagnetic waves and a basic low-end mobile phone. The researchers are calling the finding a “breakthrough” in extracting data from air-gapped systems and say it serves as a warning to defense companies and others that they need to immediately “change their security guidelines and prohibit employees and visitors from bringing devices capable of intercepting RF signals,” says Yuval Elovici, director of the Cyber Security Research Center at Ben-Gurion University of the Negev, where the research was done.
The attack requires both the targeted computer and the mobile phone to have malware installed on them, but once this is done the attack exploits the natural capabilities of each device to exfiltrate data. Computers, for example, naturally emit electromagnetic radiation during their normal operation, and cell phones by their nature are “agile receivers” of such signals. These two factors combined create an “invitation for attackers seeking to exfiltrate data over a covert channel,” the researchers write in a paper about their findings.
The research builds on a previous attack the academics devised last year using a smartphone to wirelessly extract data from air-gapped computers. But that attack involved radio signals generated by a computer’s video card that get picked up by the FM radio receiver in a smartphone.
The new attack uses a different method for transmitting the data and infiltrates environments where even smartphones are restricted. It works with simple feature phones that often are allowed into sensitive environments where smartphones are not, because they have only voice and text-messaging capabilities and presumably can't be turned into listening devices by spies. Intel's manufacturing employees, for example, can only use “basic corporate-owned cell phones with voice and text messaging features” that have no camera, video, or Wi-Fi capability, according to a company white paper citing best practices for its factories. But the new research shows that even these basic Intel phones could present a risk to the company.
“[U]nlike some other recent work in this field, [this attack] exploits components that are virtually guaranteed to be present on any desktop/server computer and cellular phone,” they note in their paper.
Though the attack permits only a small amount of data to be extracted to a nearby phone, it's enough to exfiltrate passwords or even encryption keys in a minute or two, depending on the length of the password. But an attacker wouldn't actually need proximity or a phone to siphon data. The researchers found they could also extract much more data from greater distances using a dedicated receiver positioned up to 30 meters away. This means someone with the right hardware could wirelessly exfiltrate data through walls from a parking lot or another building.
Although the phone-based attack could be mitigated simply by barring all mobile phones from a sensitive work environment, combating an attack that uses a dedicated receiver 30 meters away would require installing insulated walls or partitions.
The research was conducted by lead researcher Mordechai Guri, along with Assaf Kachlon, Ofer Hasson, Gabi Kedma, Yisroel Mirsky, and Elovici. Guri will present their findings next month at the Usenix Security Symposium in Washington, DC. A paper describing their work has been published on the Usenix site, though it’s currently only available to subscribers. A video demonstrating the attack has also been published online.
Data leaks via electromagnetic emissions are not a new phenomenon. So-called TEMPEST attacks were discussed in an NSA article in 1972. And about 15 years ago, two researchers published papers demonstrating how EMR emissions from a desktop computer could be manipulated through specific commands and software installed on the machine.
The Israeli researchers built on this previous knowledge to develop malware they call GSMem, which exploits this condition by forcing the computer’s memory bus to act as an antenna and transmit data wirelessly to a phone over cellular frequencies. The malware has a tiny footprint and consumes just 4 kilobytes of memory when operating, making it difficult to detect. It also consists of just a series of simple CPU instructions that don’t need to interact with the API, which helps it to hide from security scanners designed to monitor for malicious API activity.
The attack works in combination with a rootkit they devised, called ReceiverHandler, that gets embedded in the baseband firmware of the mobile phone. The GSMem malware could be installed on the computer through physical access or through interdiction methods—that is, in the supply chain while it is en route from the vendor to the buyer. The rootkit could be installed through social engineering, a malicious app, or physical access to the targeted phone.
The Nitty Gritty
When data moves between the CPU and RAM of a computer, radio waves get emitted as a matter of course. Normally the amplitude of these waves wouldn’t be sufficient to transmit messages to a phone, but the researchers found that by generating a continuous stream of data over the multi-channel memory buses on a computer, they could increase the amplitude and use the generated waves to carry binary messages to a receiver.
Multi-channel memory configurations allow data to be simultaneously transferred via two, three, or four data buses. When all these channels are used, the radio emissions from that data exchange can increase by 0.1 to 0.15 dB.
The GSMem malware exploits this process by causing data to be exchanged across all channels to generate sufficient amplitude. But it does so only when it wants to transmit a binary 1. For a binary 0, it allows the computer to emit at its regular strength. The fluctuations in the transmission allow the receiver in the phone to distinguish when a 0 or a 1 is being transmitted.
“A ‘0’ is determined when the amplitude of the signal is that of the bus’s average casual emission,” the researchers write in their paper. “Anything significantly higher than this is interpreted as a binary ‘1’.”
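This amplitude scheme is essentially on-off keying. The following toy simulation (my own illustration, not the researchers' code, with arbitrary amplitude values) shows how a receiver can recover bits just by thresholding each sample against the bus's baseline emission level:

```python
import random

BASELINE = 1.0   # average "casual" bus emission amplitude (arbitrary units)
BOOST = 1.5      # amplitude when all memory channels are driven at once
NOISE = 0.1      # ambient electromagnetic noise

def transmit(bits):
    """Model the emitted amplitude per bit: boosted for '1', idle for '0'."""
    return [(BOOST if b else BASELINE) + random.uniform(-NOISE, NOISE)
            for b in bits]

def receive(samples, threshold=(BASELINE + BOOST) / 2):
    """Recover bits by thresholding each sample against the baseline."""
    return [1 if s > threshold else 0 for s in samples]

message = [1, 0, 1, 0, 0, 1, 1, 0]
assert receive(transmit(message)) == message
```

In the real attack the "boost" comes from flooding all memory channels with transfers, and the receiver must first calibrate its threshold against the observed baseline, since absolute amplitudes vary with distance and environment.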
The receiver recognizes the transmission and converts the signals into binary 1s and 0s and ultimately into human-readable data, such as a password or encryption key. It stores the information so that it can later be transmitted via mobile-data or SMS or via Wi-Fi if the attack involves a smartphone.
The receiver knows when a message is being sent because the transmissions are broken down into frames of sequential data, each composed of 12 bits, that include a header containing the sequence “1010.” As soon as the receiver sees the header, it takes note of the amplitude at which the message is being sent, makes some adjustments to sync with that amplitude, then proceeds to translate the emitted data into binary. They say the most difficult part of the research was designing the receiver malware to decode the cellular signals.
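The framing step can be sketched as a simple scan over the recovered bitstream. The paper specifies 12-bit frames beginning with the sequence 1010; the split into a 4-bit header and 8-bit payload below is my assumption for illustration:

```python
HEADER = [1, 0, 1, 0]
FRAME_BITS = 12          # 12-bit frames per the paper; 4+8 split is assumed

def parse_frames(bitstream):
    """Scan a raw bitstream for 12-bit frames that begin with the 1010
    header and return the payload bits of each frame found."""
    payloads, i = [], 0
    while i + FRAME_BITS <= len(bitstream):
        if bitstream[i:i + len(HEADER)] == HEADER:
            payloads.append(bitstream[i + len(HEADER):i + FRAME_BITS])
            i += FRAME_BITS   # skip past the whole frame
        else:
            i += 1            # keep hunting for the header
    return payloads

noise = [0, 0, 1]
frame = HEADER + [0, 1, 0, 0, 0, 0, 0, 1]   # payload 0x41, ASCII 'A'
assert parse_frames(noise + frame) == [[0, 1, 0, 0, 0, 0, 0, 1]]
```

A real receiver additionally uses the header bits to measure the sender's current amplitude and sync to it before decoding the payload, as described above.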
For their test, the researchers used a nine-year-old Motorola C123 phone with Calypso baseband chip made by Texas Instruments, which supports 2G network communication, but has no GPRS, Wi-Fi, or mobile data capabilities. They were able to transmit data to the phone at a rate of 1 to 2 bits per second, which was sufficient to transmit 256-bit encryption keys from a workstation.
They tested the attack on three workstations with different Microsoft Windows, Linux, and Ubuntu configurations. The experiments all took place in a space with other active desktop computers running nearby, to simulate a realistic work environment with plenty of electromagnetic noise that the receiver has to contend with to find the signals it needs to decode.
Although the aim of their test was to see if a basic phone could be used to siphon data, a smartphone would presumably produce better results, since such phones have better radio frequency reception. They plan to test smartphones in future research.
But even better than a smartphone would be a dedicated receiver, which the researchers also tested. Using dedicated hardware as the receiver from up to 30 meters away, rather than a nearby phone, they were able to achieve a transmission rate of 100 to 1,000 bits per second. They used GNU Radio, a software-defined radio kit, and an Ettus Research Universal Software Radio Peripheral B210.
Although there are limits to the amount of data any of these attacks can siphon, even small bits of data can be useful. In addition to passwords, an attacker could use the technique to siphon the GPS coordinates of sensitive equipment to determine its location—for example, a computer being used to operate a covert nuclear program in a hidden facility. Or it could be used to siphon the RSA private key that the owner of the computer uses to encrypt communications.
“This is not a scenario where you can leak out megabytes of documents, but today sensitive data is usually locked down by smaller amounts of data,” says Dudu Mimran, CTO of the Cyber Security Research Center. “So if you can get the RSA private key, you’re breaking a lot of things.”
Valve’s Steam is the biggest platform in the PC gaming market, with Valve themselves being one of the most prominent companies in the gaming industry as a whole. Steam has millions of accounts all over the world, and in some cases people have invested literally thousands of dollars into their own accounts. Which is why a security breach like the one that just occurred a few days ago is something to take very seriously.
Reports are still sketchy and information keeps coming out—Valve themselves are yet to make an official statement on the issue—but according to a demonstration that was posted on YouTube, a hacker could abuse the “forgotten password” feature in Steam’s log-in service, completely bypassing the stage where they have to enter a security code, and being granted access to reset the password of the account.
All an attacker needs to carry out this exploit is the account name of a Steam user. It’s not yet clear if Steam Guard offers sufficient protection from the exploit, as there have been some reports from users claiming that their accounts have been compromised even with Steam Guard enabled.
Valve have closed the loophole already, but not before significant amounts of damage were done to many users. Among the affected are various prominent Twitch streamers, who’ve had their accounts hijacked and locked down. Valve have apparently started to impose a 5-day “ban” on accounts that have been compromised in the incident, but it’s not clear if there will be any additional consequences for those who have been affected.
Some users have been worried about the possibility of “VAC bans” – Valve’s anti-cheat system is quite notorious for its permanent bans, and even in cases where users have had their accounts hijacked, Valve typically never revert these bans.
On the other hand, users who actively trade on the Steam Market have been worried that they might lose some of their hard-earned items, which is a real danger now that their accounts have been compromised. This could be one of the reasons for the 5-day lockdown, as it would allow Valve to carefully sort out the mess without people trading and getting in their way.
Some have pointed out that Valve’s silence on the matter has been worrying. It’s been nearly 24 hours since the issue started spreading publicly, and considering the large number of potentially compromised accounts, the responsible thing would be to notify users as soon as possible so they can take steps to secure their own accounts.
However, Valve haven’t commented on the situation yet and it’s not clear when they are going to speak up. Various social media sites have been discussing the issue very actively, such as reddit, where it’s already popped up in many popular sections and has been getting a lot of attention.
Users are advised to keep an eye on their e-mail accounts. If an e-mail related to password recovery is received, the user should definitely not ignore it, and proceed to verify that their account is still accessible.
It’s important to note that the information contained in the e-mail itself is not necessary to carry out the attack. Receiving this e-mail is simply a sign that the user is being targeted with the attack. However, some have reported that even changing their password has been ineffective, as the hackers are able to simply keep resetting it over and over again, and there was no good way to stop them.
A new anonymous web browser capable of delivering encrypted data across the dark web at high speeds has been developed by security researchers.
HORNET (High-speed Onion Routing at the Network Layer), created by researchers from Zurich and London, is capable of processing anonymous traffic at speeds of more than 93 Gb/s, paving the way for what academics refer to as “internet-scale anonymity”.
The research paper detailing the anonymity network reveals that it was created in response to revelations concerning widespread government surveillance that came to light through the US National Security Agency (NSA) whistleblower Edward Snowden.
HORNET has also been designed to overcome the flaws identified with other anonymous web browsers, such as Tor.
“Recent revelations about global-scale pervasive surveillance programs have demonstrated that the privacy of internet users worldwide is at risk,” the researchers have stated.
“To protect against these and other surveillance threats, several anonymity protocols, tools, and architectures have been proposed. Tor is the system of choice for over 2 million daily users, but its design as an overlay network suffers from performance and scalability issues: as more clients use Tor, more relays must be added to the network.”
Due to Tor’s system of encryption between the servers or relays that make up its network, web browsing can be a much slower experience than on the open web.
In order to achieve higher speeds, HORNET uses “source-selected paths and shared keys between endpoints and routers to support [anonymous communication]”, meaning that data is not encrypted as often as Tor, but still remains anonymous.
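The layered-encryption idea common to onion-routing designs like Tor and HORNET can be illustrated with a toy example. This is not HORNET's actual protocol (which uses Sphinx-style packet formats and carefully constructed per-hop state); the XOR "cipher" below, built from a SHA-256 counter-mode keystream, is deliberately simplistic and not real cryptography:

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Toy keystream from SHA-256 in counter mode (NOT real cryptography)."""
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_layer(data: bytes, key: bytes) -> bytes:
    """Apply/remove one encryption layer (XOR is its own inverse)."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

# The source shares one symmetric key with each router on its chosen path
# and wraps the payload in one layer per hop, innermost layer first.
hop_keys = [b"key-hop1", b"key-hop2", b"key-hop3"]
payload = b"hello, anonymous world"
packet = payload
for key in reversed(hop_keys):
    packet = xor_layer(packet, key)

# Each router along the source-selected path peels off exactly one layer;
# only the final hop sees the plaintext.
for key in hop_keys:
    packet = xor_layer(packet, key)
assert packet == payload
```

Note that XOR layers commute, so the peeling order doesn't matter here; with a real cipher it would, and each intermediate router would see only an opaque blob plus its own next-hop routing information.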
According to its creators, HORNET is also less vulnerable to attacks that have been used to reveal the identity of Tor users. The Tor Project has declined to comment on HORNET until the research has been peer-reviewed.