Should we debate whether Spectre mitigations matter for some (or all) desktop computers? I know that, in theory, I could install a piece of malware on my Linux box that tries to read my memory via those methods, but let's be honest - we're all mostly concerned with servers that run code for dozens or hundreds of different clients.
I'm as tin-foil-hat as the next person - security is of the utmost concern to me, but for once I genuinely don't care and would take the performance back on my local dev machine.
> Unfortunately "install a piece of software" also includes allowing javascript to run in your browser.
Per-process Spectre mitigations could be helpful there, but I don't understand the technical details to know whether that would be possible to implement. It would be nice to disable mitigations on a video editor and for gaming.
The way I understand it (not that well, admittedly), per-process mitigations would be all about keeping that process from reading other memory areas, not about protecting that process from others. Which is better than the reverse if your intention is to allow some processes to run random js.
Yeah, but in the context of games, most likely it means threaded code written in C or C++.
If anything, Spectre has shown us that the only real mitigation is to go back to multiprocessing with IPC, with the extra hardware resources that entails, since the exploit works on in-process memory regardless of the language.
The problem with spectre/meltdown is they don't use standard calls to ask for memory of other processes... that is the vulnerability.
If you were to do per-process mitigation, it would need to prevent certain process execution flows from happening, because the vulnerability stems from how the CPU pulls in memory during speculative execution. This actually impacts things that are typically protected and is much more impactful than people think... for example, because the bug allows reading from nearly any memory address, you could also pull in things like crypto keys from the OS or passwords from memory.
Yeah, but if you can mitigate per process, you can keep the secrets of trusted code from the eyes of untrusted code by only affecting the performance of the untrusted processes. The way I understand it the "protection cost" would fall on potential attackers, not on potential victims (unless of course they are both, holders of secrets and runners of untrusted code).
Unfortunately, that is an incorrect understanding. In the context of stuff like this, "untrusted code" can be so many different things; it isn't just being able to run an executable... The issue with these vulnerabilities is that the processor pre-runs/prefetches certain things for performance - things likely to be run in the future - so that it has those results cached, or to increase the number of instructions per CPU cycle. The issue is that many of these triggers aren't direct executable code, but more like access patterns.
For example, a speculative execution vulnerability might exist when there is a string of data in memory that looks like a memcpy pattern, with memory addresses that are valid. To speed up execution, the processor might proactively read the memory referenced by that memcpy pattern because it is already sending a command to the memory controller.
Many of the mitigation techniques amount to flushing caches/buffers and being careful about memory reads, which are precisely the things that take the performance hits. And since the triggers are access patterns, the vulnerable code can be the HTML renderer compiled into an application, an SVG rendering library, or a mail client.
For example, let's say there are mitigations built into the JS library... if you build a specifically crafted SVG, you could create similar access patterns to get around them. It's a huge cat-and-mouse game if there aren't OS protections, and it only takes a single app to lose your secrets stored in memory.
Opt-in per-process spectre mitigation is already the case for some of them, because the mitigations in question are way too costly.
Now it is not possible for every kind of mitigation, because e.g. patching the kernel when switching between mitigated and unmitigated processes would be more costly than just always running the mitigations.
edit: thinking more about it: you could have crazy ideas like two versions of the whole kernel space always loaded :D not sure about the cache impact in this case though.
Yeah no disagreement there. I had totally forgotten about the JS POC - ugh!
The JavaScript argument is interesting to me in that it's already flawed. I suppose I'd rather focus on the security issues with browsers running code on my computer more than anything else since it's effectively the "but what about ___" answer to so many threads like this one.
I've seen a few other comments suggesting per-process rules to enable or disable branch protections. That's an interesting thought, especially considering you could apply it to either "trusted" or "untrusted" code depending on its source.
I don't know about the wild, but if this were tuned (i.e. the first byte requires a lot of work, the rest are easy) for an HVT, you wouldn't know.
The problem is that PoC is extracting data which the PoC itself created specifically to facilitate said extraction. AFAIK no one has created a PoC which can extract specific data which hasn't been constructed to facilitate the PoC.
Yes, but your fans would start spinning like mad. I kill any browser that does that. You’d have to execute a successful attack within a few seconds to pull it off. I think that’s a risk I’ll take.
If this is your security mechanism (chuckle), then attackers will just slow themselves down by duty cycling. Say, only attacking for 100ms at a time, then sleeping a second. You'd never know.
...making it even more unlikely the attack would find anything of value (or even recognisable as such) in a reasonable amount of time.
To use an analogy, these side-channel timing attacks are really a "looking for a needle in a haystack" (or heap...) situation, except that [1] you don't necessarily know what a needle looks like, and [2] the haystack is constantly changing. AFAIK all the PoCs shown so far relied on having a deep knowledge of the system and carefully constructed conditions.
If these attacks could undetectably dump all of your RAM in a few seconds, that would definitely be a huge concern. But they're more like being able to read a few bytes per second, from somewhere in the address space, with no idea what they are or where they're being read from, and no guarantee that they're even contiguous.
I agree that it's not good, yet anecdotally I've discovered that a device was compromised by running `top`, on both Windows and Linux. It's not a good heuristic because it's only discoverable post-compromise.
I can’t tell you how many startups’ splash-page animations have actually caused my PC fans to spin at maximum. What is this website possibly doing with all my available computing power?
That's the first time I have heard a number this high for these exploits. All prior numbers I've heard were many orders of magnitude smaller, more like bytes per second. The article linked above cites 1 kB/s as novel.
That was a number I heard (Meltdown pre-mitigations) when the "oh shit" papers started dropping a few years ago, could be misremembering. I'm also still slightly inebriated so thank you for nerdsniping me (Lit Review time!)
They tried. The V8 team eventually gave up and said it was unwinnable.
What they did do is move tabs to their own process so they can take average of the operating systems protections. Yes you can read the memory of the process hosting the JavaScript, but now there isn't anything interesting in it. Google's security team released a proof of concept attack that can read the memory in the renderer in many systems.
Man mobile keyboards suck. Any idiot knows the word average doesn't work there (at minimum you'd have to preface it with the word "the"), so why can't my keyboard run an ML model that's not an idiot?
Didn't even notice until reading this comment! I think my brain said, "starts with an A, got a V near the beginning, ends in AGE, must be ADVANTAGE in this context."
Mitigations for inter-process side channels address the issue of local applications attacking each other. That includes your web browser, the JS in it, or any other ad-laden crapware attacking your local applications, e.g. to steal credit cards, SSH keys, etc.
Side-channels are pernicious. In the limit, they give applications unfettered read access across protection boundaries. If we don't shut them down, we might as well throw out the whole UNIX process boundary security model.
Ask yourself, would it be fine if every process had a 4KB/s (basically dialup speed) connection to read any desired byte of another process's address space?
Of course not. Thus, we need mitigations to shut these channels down.
> address the issue of local applications attacking each other.
IMHO it's stupid to even try to isolate processes to that extent, as it's a really deep rabbithole that'll lead to worse performance and dubious increases in actual security. The best defense is to simply make everything running on the system be trusted.
Process protection boundaries should be for protection against accidental cross-process corruption, a form of reliability, and nothing more. That's effectively what the early 286/386 documentation stated, so Intel never even intended these protections to be defenses against side-channels in the first place.
Of course, the "security industry" needs to keep creating paranoia-fuel to justify their existence...
> Ask yourself, would it be fine if every process had a 4KB/s (basically dialup speed) connection to read any desired byte of another process's address space?
> Of course not.
So if it's not OK for processes to read each others' address spaces, does that mean it's not OK to attach gdb to a running process to figure out where it's stuck, without running gdb as root? I chose to re-enable ptrace among sibling processes on my system out of convenience, and hopefully it's not too much of a vulnerability. (I also chose to enable passwordless sudo, which is convenient, but probably dangerous as well. I wonder if I can use my hardware security key for sudo instead.)
Multiple transient-execution variants (RDCL, RSRR, Lazy FP state restore, SpectreRSB) bypass process boundaries. It doesn’t matter what IPC mechanism you use; they can read arbitrary privileged memory no matter who owns it.
IPC really has nothing to do with anything, Spectre-wise; you don’t have to be using any IPC mechanism in either the attacker or attackee process to be vulnerable to these variants.
Yes, I am interested in this. Basically I have mitigations OFF and also do not run a web browser on the base OS (Debian/Ubuntu here). In VMs I do run the browser, with two or three in use daily.