Feature #11792
Prevent DMA attacks over the LPC bus
Description
The LPC bus is an internal bus in x86 computers that connects the Super I/O chip and various low-speed legacy devices to the chipset. Unfortunately, it is also a vector for DMA attacks, one which cannot be mitigated by blacklisting kernel modules the way FireWire or Thunderbolt can. Any device attached to the LPC bus can send specific signals to the host. One of these signals is called LDRQ#, or “bus master request”. A device which is bus master can DMA into host memory. Even though LPC is slow and its DMA is limited to about 4 MiB/s read and 6 MiB/s write, that’s plenty to identify kernel structures and overwrite them with shellcode. There are public PoCs which do that already, and law enforcement has already used similar tools.
I believe this is an issue for Tails, both because many people are requesting a lock screen (and it’s likely that one will be added), and because an attacker who gains physical access to an unlocked machine should not be able to get access to raw memory (bugs allowing a return to the greeter screen to enter a new password aside). Because of this, and because Feature #11581 seems in scope, I think defending against this attack vector should be within Tails’ threat model. I don’t believe blacklisting the kernel module the LPC typically uses (lpc_ich) would have any effect on its ability to abuse LDRQ# requests, so a more complex solution than module blacklisting is required.
There are a few ways I can think of to defend against this. The simplest is to load the VFIO kernel module, remove any drivers associated with the LPC bus, and bind the LPC bus’ PCI ID to the VFIO stub driver. This should effectively disable its DMA ability. Obviously, computers without an IOMMU will not be able to do this, and computers whose IOMMU does not support interrupt remapping, or that have x2APIC disabled in the BIOS, may or may not be able to do this effectively. For computers with modern IOMMUs, which are probably the majority of machines running Tails, this technique should eliminate the LPC as a DMA vector with no regressions or drawbacks.
The overview of the process should be as simple as (see the sketch after this list):
1) make sure the host has an IOMMU
2) locate LPC bus PCI ID and make sure it is in its own IOMMU group
3) load necessary VFIO drivers
4) unbind any driver the LPC bus is using
5) bind vfio-pci to the LPC bus
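A minimal sketch of steps 2–5 via sysfs, assuming the LPC bridge sits at the typical Intel address 0000:00:1f.0 and that the kernel’s `driver_override` mechanism is available (both are assumptions; none of this has been tested on Tails):

```python
#!/usr/bin/env python3
# Minimal sketch (untested assumption): bind the LPC bridge to vfio-pci via
# sysfs. Run as root, with the IOMMU already enabled (step 1) and vfio-pci
# loaded (step 3, e.g. `modprobe vfio-pci`). 0000:00:1f.0 is the typical
# Intel LPC bridge address, not something detected here.
from pathlib import Path

LPC = "0000:00:1f.0"
dev = Path("/sys/bus/pci/devices") / LPC

# Step 2: check which IOMMU group the bridge belongs to
group = (dev / "iommu_group").resolve().name
print(f"{LPC} is in IOMMU group {group}")

# Step 4: unbind whatever driver currently owns the device
if (dev / "driver").exists():
    (dev / "driver" / "unbind").write_text(LPC)

# Step 5: route the device to vfio-pci and trigger a re-probe
(dev / "driver_override").write_text("vfio-pci")
Path("/sys/bus/pci/drivers_probe").write_text(LPC)
```

Whether vfio-pci accepts the LPC’s ISA-bridge function, and whether the bridge really lands in its own IOMMU group, would need verifying on real hardware.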
Information on LDRQ# (see sections 6 and 7):
http://www.intel.com/design/chipsets/industry/25128901.pdf
Information on VFIO:
https://www.kernel.org/doc/Documentation/vfio.txt
https://www.linux-kvm.org/images/b/b4/2012-forum-VFIO.pdf
History
#1 Updated by emmapeel 2016-09-22 02:33:28
- Assignee set to elouann
#2 Updated by elouann 2017-02-21 18:48:22
- Assignee changed from elouann to intrigeri
This report is far above my skill level. intrigeri, could you please have a look?
#3 Updated by cypherpunks 2017-03-04 02:54:16
Unfortunately I don’t think this will be able to work the way I intended. When I first filed this report, I had assumed that it would be feasible to boot the system with the IOMMU enabled (`intel_iommu=on` or `amd_iommu=force`), but a later report I filed to get those enabled was rejected because some systems with IOMMUs have broken DMAR tables, and will not boot as a result. In order to isolate IOMMU groups, VFIO requires the IOMMU be present and activated.
There is one other way to prevent devices attached to the PCH from mounting DMA attacks: unset the bus master bit in the command register of the device’s PCI configuration space. The PCH seems to keep a shadow copy to prevent the device from changing it back. With the bus master bit unset, the device cannot become bus master and cannot initiate DMA requests. Unfortunately, in all the SCH datasheets I’ve read, the LPC’s command register (which contains the bus master bit, set by default) is read-only, so it cannot be changed.
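For illustration, this is roughly what clearing that bit would look like via sysfs, as a minimal sketch (the device address is an assumed typical LPC bridge location; per the datasheets below, on the SCH the write would simply not stick, since the register is read-only):

```python
# Minimal sketch: attempt to clear the Bus Master Enable bit (bit 2 of the
# 16-bit command register at config-space offset 0x04). Run as root; the
# device address is an assumed typical LPC bridge location. Per the SCH
# datasheet, the register is read-only there, so re-reading it afterwards
# would show the bit still set.
import struct

with open("/sys/bus/pci/devices/0000:00:1f.0/config", "r+b") as f:
    f.seek(0x04)
    (cmd,) = struct.unpack("<H", f.read(2))
    f.seek(0x04)
    f.write(struct.pack("<H", cmd & ~0x0004))
```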
Without an active IOMMU, I can’t think of any way to prevent `LDRQ#` from initiating a DMA request. I’ll look into it, though. It’s not a huge priority, because PCIe hotplugging is an easier way to mount a DMA attack and isn’t currently protected against, even though it’s low-hanging fruit. Plus, many laptops don’t even have the `LDRQ#` signal available on their LPC. But it’s still important enough to look into, to raise the bar high enough that attackers are forced to use JTAG or logic analyzers.
See section 17.2, table 53, mnemonic `PCICMD`, default 0003h (page 354) for the RO bus master bit issue:
https://www-ssl.intel.com/content/dam/www/public/us/en/documents/datasheets/sch-datasheet.pdf
Scroll down to the information on the command register:
http://wiki.osdev.org/PCI#PCI_Device_Structure
#4 Updated by cypherpunks 2017-03-04 03:16:05
Perhaps this and related tickets (e.g. disabling PCI/PCIe hotplugging) should be collected into a parent ticket dedicated to raising the bar for DMA attacks against Tails systems. As it is now, they’re a bit scattered, and without them all in one place, it’d be more difficult to know where we stand in terms of resistance to classical DMA attacks. It’d be a shame to find a wonderful, elegant solution to the LPC problem, just to have Debian’s built-in PCIe hotplugging detection bite us in the butt.
#5 Updated by intrigeri 2017-03-20 10:28:03
- Assignee changed from intrigeri to cypherpunks
- QA Check set to Info Needed
> Any device attached to the LPC bus can send specific signals to the host
Could you please describe the attack scenario you have in mind? (probably starting with “I leave my Tails system locked but unattended and an attacker gets physical access to it”) What does an attacker need to do to exploit this?
#6 Updated by intrigeri 2017-03-20 10:30:24
cypherpunks wrote:
> Perhaps this and related tickets (e.g. disabling PCI/PCIe hotplugging) should be collected into a parent ticket dedicated to raising the bar for DMA attacks against Tails systems.
All tickets about external buses should now have Feature #5451 as their parent. Are there other tickets about DMA attacks, that are not about an external bus? If yes, then let’s create a parent ticket to gather them.
#7 Updated by cypherpunks 2017-03-30 05:40:09
intrigeri wrote:
> > Any device attached to the LPC bus can send specific signals to the host
>
> Could you please describe the attack scenario you have in mind? (probably starting with “I leave my Tails system locked but unattended and an attacker gets physical access to it”) What does an attacker need to do to exploit this?
This assumes the attacker wants to retrieve sensitive information from physical memory and is allowed prolonged physical access to the computer. I could make a more complete attack tree if required, but I hope this is sufficient.
The attacker would take advantage of one of the following scenarios:
- The Tails system is locked and unattended, and sensitive information is present behind the lock screen.
- The Tails system is unlocked, and sensitive information is left in memory which the amnesia user cannot access.
The attacker would need all of the following to be true:
- The attacker has an advanced understanding of the x86 architecture, and the target is using a modern x86 system.
- The LPC bus has bus mastering enabled. Datasheets show that the LPC’s command register forces bus mastering always on.
- The LPC supports the `LDRQ#` signal, needed to initiate DMA requests. I know someone who tried to find laptops that supported `LDRQ#`, but found none after a while. This says nothing of older laptops or desktops, though.
- The attacker has a device capable of initiating DMA requests. I have not seen any such forensics device on the market, but it would be easy to make one, as the protocol is slow, simple, and public.
- No IOMMU is isolating the LPC and protecting the host, such as with DMAR. Tails does not make use of DMAR.
- No chassis intrusion detection/prevention system is installed which shuts down the computer upon breach.
An example of how the attacker would take advantage of this situation:
- An attack device is connected to the LPC bus.
- `LDRQ#` goes low with the DMA channel number. The ACT bit goes high. `LDRQ#` goes high. `LFRAME#` is asserted.
- Using DMA reads, the IDT is located, such as through heuristics identified in the TRESOR-HUNT paper.
- The location pointed to by an IDT entry for an interrupt vector is resolved and hooked with the attacker’s payload.
- The payload executes when the hooked interrupt handler is called. The original code is then restored.
- The payload copies the rest of memory, not just the lower 4 GiB that LPC DMA is limited to, over a faster bus than the LPC.
- The entire contents of memory are obtained and are ready for forensic analysis.
With a (most likely overpriced) user-friendly attack device, this entire process could be simplified to the point where all the attacker needs to do is locate the LPC, plug in a device, and in seconds see the browser history, encryption keys, deleted file history (including deleted files in tmpfs), and more appear, while a USB 3.0 hard drive fills with the system memory.
Regarding the second scenario the attacker may encounter, I believe this is going to be the most common and the most dangerous. There is research showing methods of attacking live distros through memory analysis, such as analyzing tmpfs memory for deleted files and analyzing the Tor process itself. Specifically:
https://media.blackhat.com/bh-dc-11/Case/BlackHat_DC_2011_Case_De-Anonymizing_Live_CDs-wp.pdf
https://www.slideshare.net/AndrewDFIR/deanonymizing-live-cds-through-physical-memory-analysis
#8 Updated by cypherpunks 2017-03-30 05:52:32
intrigeri wrote:
> All tickets about external buses should now have Feature #5451 as their parent. Are there other tickets about DMA attacks, that are not about an external bus? If yes, then let’s create a parent ticket to gather them.
Yes, Feature #11581 is primarily about internal PCI/PCIe devices and is incorrectly marked as an external-bus issue (even if external buses themselves often interface with PCIe). An attack involving CardBus or Thunderbolt will not be exploiting PCIe hotplugging. Perhaps Feature #12301 is also related; it is designed to protect against this kind of thing, but went ignored.
#9 Updated by intrigeri 2017-04-01 08:05:45
- Priority changed from Normal to Low
> The attacker would need all of the following to be true:
> […]
> * The LPC supports the `LDRQ#` signal, needed to initiate DMA requests. I know someone who tried to find laptops that supported `LDRQ#`, but found none after a while. This says nothing of older laptops or desktops, though.
OK, I’m lowering priority then.
> An example of how the attacker would take advantage of this situation:
> * An attack device is connected to the LPC bus.
This means opening the computer chassis and finding the right place to plug this device, right?
But then, one can as well perform a cold boot attack on the RAM modules, which might be easier, no?
#10 Updated by cypherpunks 2017-04-03 01:50:48
intrigeri wrote:
> This means opening the computer chassis and finding the right place to plug this device, right?
> But then, one can as well perform a cold boot attack on the RAM modules, which might be easier, no?
Yeah you have to find the right place, but it’s typically labeled on the board as “LPC” and is not obscurely placed.
A cold boot attack is almost always the last resort because it is very unreliable on modern computers. There have been cases where attackers have gone to the trouble of writing entire custom BIOS code because a classical cold boot attack would be too unreliable. You only get one shot lasting a couple of seconds, some motherboards clear memory when you reboot during POST, some RAM requires being cleared to function after a reset anyway, most modern memory uses LFSR scrambling (which, while easy to break, still means you can’t just dump and run `strings` on the file), most modern memory interleaves data between modules, etc. And to make it worse for the attacker, newer AMD systems have started encrypting all their memory, and laptop memory, especially LPDDR3/LPDDR4, tends to be soldered in.
#11 Updated by intrigeri 2017-04-05 08:12:36
- Status changed from New to Confirmed
- Type of work changed from Code to Research
#12 Updated by intrigeri 2017-04-05 08:13:54
> intrigeri wrote:
>> This means opening the computer chassis and finding the right place to plug this device, right?
> Yeah you have to find the right place, but it’s typically labeled on the board as “LPC” and is not obscurely placed.
OK.
>> But then, one can as well perform a cold boot attack on the RAM modules, which might be easier, no?
> A cold boot attack is almost always the last resort because it is very unreliable on modern computers. There have been cases where attackers have gone to the trouble of writing entire custom BIOS code because a classical cold boot attack would be too unreliable. You only get one shot lasting a couple of seconds, some motherboards clear memory when you reboot during POST, some RAM requires being cleared to function after a reset anyway, most modern memory uses LFSR scrambling (which, while easy to break, still means you can’t just dump and run `strings` on the file), most modern memory interleaves data between modules, etc. And to make it worse for the attacker, newer AMD systems have started encrypting all their memory, and laptop memory, especially LPDDR3/LPDDR4, tends to be soldered in.
Thanks. FWIW I meant something like “open the case, spread the thing that makes them cold on the RAM modules so that the content is preserved for a longer time, extract RAM modules and plug them in some device that will dump their content”. This doesn’t sound much more complicated or unreliable than the LPC attack you described, but indeed I forgot about soldered modules :)
So, let’s now assume that we would like to try & protect against attacks over the LPC bus. How could we do that? Re-reading this ticket, it’s not clear to me.
#13 Updated by Anonymous 2018-01-19 13:27:56
ping?
#14 Updated by cypherpunks 2018-06-04 04:22:38
intrigeri wrote:
> Thanks. FWIW I meant something like “open the case, spread the thing that makes them cold on the RAM modules so that the content is preserved for a longer time, extract RAM modules and plug them in some device that will dump their content”. This doesn’t sound much more complicated or unreliable than the LPC attack you described, but indeed I forgot about soldered modules :)
If I were investigating a live system running Tails, I would use a cold boot attack only as an absolute last resort because, especially with modern memory, it is very unreliable and you only get one shot at it. It’s also a little more difficult on DDR3/DDR4 because you have to write software to attack the LFSR scrambler seed (trivial to do with a few bytes of known plaintext, but still requiring at least minimal knowledge of cryptanalysis). It’s even harder with ECC, which requires being initialized to a known value at boot, necessitating a custom BIOS to extract the memory. With DMA, you can spend all the time you want trying various attacks without worrying that your first attempt will be your last.
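To make the “few bytes of known plaintext” point concrete: XORing scrambled memory with known plaintext yields keystream bits, and the standard Berlekamp-Massey algorithm then recovers the shortest LFSR generating them. A minimal sketch of that general technique, using an invented 16-bit toy LFSR in place of the real scrambler:

```python
# Minimal sketch: recover an LFSR from keystream bits via Berlekamp-Massey
# over GF(2). The keystream would come from XORing scrambled memory with
# known plaintext; the 16-bit toy LFSR below (taps chosen for the demo)
# stands in for the real scrambler.

def berlekamp_massey(s):
    """Return the length of the shortest LFSR generating bit sequence s."""
    n = len(s)
    c, b = [0] * n, [0] * n   # current and previous connection polynomials
    c[0] = b[0] = 1
    L, m = 0, -1
    for i in range(n):
        # Discrepancy: does the current LFSR predict bit i correctly?
        d = s[i]
        for j in range(1, L + 1):
            d ^= c[j] & s[i - j]
        if d:
            t = c[:]
            for j in range(n - (i - m)):
                c[j + i - m] ^= b[j]
            if 2 * L <= i:
                L, m, b = i + 1 - L, i, t
    return L

# Generate 64 bits from a toy 16-bit Fibonacci LFSR (taps 16, 14, 13, 11).
state, bits = 0xACE1, []
for _ in range(64):
    bits.append(state & 1)
    fb = (state ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
    state = (state >> 1) | (fb << 15)

print("recovered LFSR length:", berlekamp_massey(bits))  # prints 16
```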
> So, let’s now assume that we would like to try & protect against attacks over the LPC bus. How could we do that? Re-reading this ticket, it’s not clear to me.
It would require enabling the IOMMU (e.g. with `intel_iommu=on`), which I believe was turned down previously because some laptops’ DMAR tables are corrupt and result in boot problems when the IOMMU is enforced (this is the same reason Debian doesn’t enable it by default, and Tails’ stance of keeping their delta very low hinders these sorts of things).
u wrote:
> ping?
Pong
#15 Updated by cypherpunks 2018-06-04 04:31:40
Note that LPC is slowly being deprecated and is being replaced with eSPI, which I know next to nothing about.
#16 Updated by intrigeri 2018-06-04 06:10:05
- Status changed from Confirmed to Rejected
>> So, let’s now assume that we would like to try & protect against attacks over the LPC bus. How could we do that? Re-reading this ticket, it’s not clear to me.
> It would require enabling the IOMMU (e.g. with `intel_iommu=on`), which I believe was turned down previously because some laptops’ DMAR tables are corrupt and result in boot problems when the IOMMU is enforced
Indeed, sadly we cannot do that because it breaks things on too much hardware :/
(On a non-Live system, having the IOMMU enabled by default would be slightly less of a problem, because once it’s identified as the root cause of a real-world boot problem, the user can disable the IOMMU once and for all in their bootloader config. In the context of Tails, an affected user would have to do this every single time they start Tails.)
So my understanding is that protecting against attacks over the LPC bus requires enabling something we cannot enable ⇒ rejecting.
> Tails’ stance of keeping their delta very low hinders these sorts of things).
That’s unrelated to the iommu topic. We do set a number of custom kernel command line options and we can add more whenever it makes sense.
#17 Updated by cypherpunks 2018-06-05 00:48:21
intrigeri wrote:
> (On a non-Live system, having the IOMMU enabled by default would be slightly less of a problem, because once it’s identified as the root cause of a real-world boot problem, the user can disable the IOMMU once and for all in their bootloader config. In the context of Tails, an affected user would have to do this every single time they start Tails.)
Perhaps a hardware whitelist using DMI in the bootloader could work here?
> So my understanding is that protecting against attacks over the LPC bus requires enabling something we cannot enable ⇒ rejecting.
Fair enough.
#18 Updated by intrigeri 2018-06-05 07:44:34
> Perhaps a hardware whitelist using DMI in the bootloader could work here?
Well, if someone (a cross-distro effort, I guess) has the resources to maintain that list, then presumably it could be used directly in the kernel to enable the IOMMU by default except on such hardware (the kernel cmdline options could still be used to force the enabled/disabled state), which would spare everyone the need to maintain config hacks for N bootloaders :)
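For concreteness, a bootloader-side variant could be as small as the following sketch (the DMI sysfs attributes are standard; the denylist entries are invented placeholders, and maintaining the real list is exactly the burden discussed above):

```python
# Minimal sketch: emit intel_iommu=on unless the machine matches a DMI
# denylist of systems with known-broken DMAR tables. The denylist entries
# are invented placeholders.
from pathlib import Path

BROKEN_DMAR = {
    ("ExampleVendor", "ExampleModel"),  # placeholder entry
}

dmi = Path("/sys/class/dmi/id")
vendor = (dmi / "sys_vendor").read_text().strip()
product = (dmi / "product_name").read_text().strip()

if (vendor, product) not in BROKEN_DMAR:
    # A bootloader hook would append this to the kernel command line.
    print("intel_iommu=on")
```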
#19 Updated by cypherpunks 2018-06-07 04:07:17
intrigeri wrote:
> > Perhaps a hardware whitelist using DMI in the bootloader could work here?
>
> Well, if someone (a cross-distro effort, I guess) has the resources to maintain that list, then presumably it could be used directly in the kernel to enable the IOMMU by default except on such hardware (the kernel cmdline options could still be used to force the enabled/disabled state), which would spare everyone the need to maintain config hacks for N bootloaders :)
Sort of like the various quirks kernel option, good point! I’ll look into whether or not that is feasible.
#20 Updated by intrigeri 2018-06-07 05:47:20
> Sort of like the various quirks kernel option, good point! I’ll look into whether or not that is feasible.
:)
#21 Updated by intrigeri 2019-12-31 09:35:28
Note that since Tails 4.0, the Linux kernel we ship has `INTEL_IOMMU_DEFAULT_ON_INTGPU_OFF` enabled. I think it’s the equivalent of `intel_iommu=on,igfx_off`. I suspect it’s the cause of hardware support regressions (Bug #17380).
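For reference, whether the running kernel actually activated an IOMMU on a given machine can be checked from userspace; a minimal sketch (the sysfs path is standard on modern kernels):

```python
# Minimal sketch: an active IOMMU shows up as populated groups in sysfs.
from pathlib import Path

path = Path("/sys/kernel/iommu_groups")
groups = list(path.iterdir()) if path.exists() else []
print(f"{len(groups)} IOMMU group(s) active" if groups else "no active IOMMU")
```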