I know this is a bit of a pet peeve of mine (and not a very popular opinion), but I think browsers are doing too much, and WebAudio is a very good example of that.
Instead of standardizing something low-level to input/output audio and query the hardware, like an OS does, it standardizes a huge number of processing nodes and filters, including downmixers, panners, biquad filters, and convolution (for reverb), which is where the bug behind this exploit lives.
The resulting complexity is huge, almost impossible to implement correctly in a cross-browser way, and it means a lot more C/C++ code gets written.
I'd advocate simpler, lower-level audio APIs, letting JS or Wasm do the filtering under the JS sandboxing model.
And, of course, this does not apply only to WebAudio, but I think WebAudio is a good symptom.
It's not just you. Browsers are systems-within-systems. They're gigantic, illustrated by the fact that the number of implementations that are well maintained can be counted on one hand. A lot of people take issue with that.
Or Microsoft did a back of the napkin calculation and realised that the potential increase in user base for Microsoft products wasn’t worth the money.
Their users are already on Windows, and Microsoft already has telemetry on them. And we're talking about the browser's guts, not its surface, which is where most of the UX lives. But none of this means that Microsoft is incompetent at browsers.
macOS's media framework runs sandboxed, so even if the media stack gets exploited, the exploit can't write to disk or launch processes like in this example.
There's really no excuse in this case for WebAudio, of all things, to be able to write to disk. And it's dubious that Chrome should be able to launch any processes beyond the few specific child jobs it needs for process isolation.
We need something like `pledge` from BSD available across all platforms. Would mitigate quite a few of these exploits. Though, shipping good SELinux policy could solve the problem as well, maybe even better.
Wasm is still a new phenomenon, and those APIs were designed without Wasm in mind. Note that Wasm is also slower, and thus more resource (battery) consuming, than a native C++ implementation. Also, Chrome is already adding AudioWorklets to WebAudio, so it will become customizable. Last but not least, browsers are at least a central place that can be patched. Compare this to a possible future where vulnerable Wasm libraries are used in web apps, allowing third parties to run cross-site Wasm-based exploits. Websites can't even switch off SMS-based 2FA; would you expect them to keep their Wasm libraries up to date?
I disagree. By having the browser implement these features we gain security. Chrome's sandbox is extremely powerful, and they can isolate and restrict all of those features you're describing very deliberately, in a way that a more generalized OS provided tool may not be able to.
I completely disagree. I think we've already seen with the JS-framework-of-the-day culture, flash, java applets and whatnot what happens when a browser doesn't provide the right level of abstraction for the developers. The developer tools are still years behind what developers need and if the browser doesn't provide them, a million npm dependencies will.
The issue is that a lot of the use cases for the processing you mention have soft real-time requirements where GC pauses are unacceptable, and the underlying OSes have differing worldviews when it comes to dealing with audio (even on the same OS, there may be different driver APIs with fundamentally different approaches to audio I/O!).
So having a simple audio API + ability to do pro-audio like things in JS is not really possible in a browser. They have to expose a complex framework with a nuanced API to the underlying engine to make some of this stuff even possible in a web browser.
> "The exploit used a race condition bug between two threads due to missing proper synchronization between them. It gives an attacker a Use-After-Free (UaF) condition that is very dangerous because it can lead to code execution scenarios"
This is why C++ needs to be retired; and why we need to use safer languages. Not even Google can write safe C++. Thankfully Mozilla have already realised this.
> This is why C++ needs to be retired; and why we need to use safer languages.
These problems can easily happen in a language like Go, unfortunately. Go is memory safe for sequential code, but that guarantee does not extend to in-process concurrency - Goroutines can access shared state without synchronization, and create memory unsafety.
But not in Rust. Go isn't exactly a bastion of safety. It's memory safe due to the GC, but it has a weak type system, so logic bugs are not well guarded against.
Disagree on this. In Rust I can read most of the information I need to know straight from the function signatures. In Go I have to go through every line of code (and there are often a lot of lines due to Go's lack of expressive power).
I guess it's a matter of preference. If something is abstracted, then you only have to audit it once (and often you can count on someone else having audited it). With Go code, everything is new and unique, so there are a lot more opportunities to introduce bugs.
Mutability/immutability is in the function signatures, and that goes a long way. It's not as good as an effect system, of course. But along with global variables barely being a thing in Rust, it goes a long way toward cutting out common sources of bugs.
Go is not memory-safe in the presence of concurrency. Interfaces and slices (i.e. fat pointers) are not written atomically, which can cause undefined behavior.
I haven't heard of this causing a problem in practice, though.
Slice values are normally copied so it's not an issue. The slice would need to be stored in a global or a struct's field and then changed while some other goroutine is reading it.
Go has a few escape hatches like the unsafe package, but outside from that, I'm pretty sure arbexec isn't achievable in Go. The compiler inserts bounds check to prevent overflows, UAF isn't really achievable thanks to the GC, etc...
You can have UAF in Go due to slicing in the presence of concurrency: the implementation doesn't use multi-word atomics to update fat pointers like slices and interfaces. That said, it's not easy to do so.
A 'stale' read of atomic data may be fine (it wouldn't really be "unsynchronized" in that sense), but as soon as you're dealing with stuff that's not completely atomic in-hardware and need actual synchronization, there's a potential of breaking expected invariants and creating further unsafety as a consequence of that.
I agree a stale read is bad. And I am surprised that the Go designers have repeated many past mistakes. It is possible to design a concurrent high-level language that does not have such issues, despite the hardware. For example, if all communication is done via channels, then synchronisation can be automatically performed during channel read/write. For shared mutable state, Haskell's software transactional memory and Erlang's Mnesia are both excellent examples of safely dealing with it.
The issue wrt. Go is not with communication via channels in and of itself, but rather with using channels to share pointers to in-memory data. This is idiomatic in Go, people do it all the time even though it's what introduces unsafety.
Yes, but my point is that it's a flaw in the language to allow this. Not all high-level languages share Go's problems. However, even Go would be a huge safety improvement over C++.
Go is actually one of the best in terms of design. Not every language has to be so complex. Go is easy to use, and that's the point. Its concurrency is best in class in many regards, and that's what allowed the language to pretty much become THE language for backend, infrastructure, and other web-related things.
Rust is on a completely different part of the spectrum and doesn't compete with Go. It's much more complex, and harder to write and read. That's the cost of safety. For a kernel or a browser engine, that's a compromise people are willing to take. For other applications, better to use something else.
> Go is actually one of the best in terms of design.
I respectfully disagree. Go's primary goal was to be (superficially) easy to use and familiar. Because of this, it fails to tackle the big issue with doing concurrency in most prior mainstream languages. Go has mutable state deeply baked into the language and makes it difficult to work with immutable data. Go is not necessarily easy to use if one's goal is to build correct concurrent software.
Go's call-by-value semantics give you similar benefits. Just different tradeoffs (because there are downsides to immutable data structures).
My understanding of Go's design aesthetic is that it prefers to be explicit about things that could impact performance, which is probably why it prefers simple data structures with explicit synchronization.
You don't need to go that far. Go could have been memory-safe if slices were one word instead of three (like Java's arrays), and interfaces were one word instead of two (like Java's interfaces).
No they can't. UaF is impossible in memory-safe languages like Go. Concurrency doesn't matter here. The only thing you would get is logic errors resulting in crashes or weird behavior.
JavaScript, which the exploit uses, is a memory-safe language too, at least in theory...
None of the languages in major production use are really completely memory safe in practice. There is always a runtime lurking underneath, most often a libc (not in Go, though, IIRC), an assortment of libraries written in memory-unsafe languages, and always a kernel.
While it's probably harder to exploit Go, since as far as I know most of the Go runtime is written in Go, it's not impossible, just a lot less likely.
Javascript is used to perform the exploit. The renderer, where the vulnerable code exists (I assume), is C++.
The problem with protecting a C++ codebase like a JS renderer is that the attacker has HUGE amounts of control. They can literally already execute arbitrary code in the renderer, making information leaks and other techniques much easier. ASLR was never intended to protect against an attacker with such a level of control.
> Javascript is used to perform the exploit. The renderer, where the vulnerable code exists (I assume), is C++.
Yes, that was my point. There is a runtime and/or VM and/or stdlib under these "memory safe" languages, and those things are never fully written in memory-safe languages themselves, at least as far as the current state of affairs goes. And even then, they run on insecure hardware (Rowhammer and Spectre, anybody?).
Would using a language with better memory safety and other guarantees like Rust be better to write these things? Most certainly, as the attack surface becomes smaller and e.g. Rust would prevent a lot of programming mistakes in the first place. Would that be a complete silver bullet, tho? Nope.
So I agree with you that using a language like C++ isn't exactly "optimal" to implement these runtimes that are meant to run untrusted code. I just didn't like that the OP declared "memory safe languages like Go" to be a silver bullet.
>ASLR was never intended to protect against an attacker with such a level of control.
ASLR is not a protection. It's an additional roadblock put in place to make things harder for attackers once shit already hit the fan.
> There is a runtime and/or VM and/or stdlib under these "memory safe" languages and these things are never fully written in memory safe languages themselves, at least as far as the current state of affairs goes.
The ideal solution is to go beyond safe languages, and use formally verified compilers and interpreters.
'CompCert', for instance, is a formally verified C compiler [0]. (Strictly, it's a compiler for a very-nearly-complete subset of standard C.) (As an aside, I Googled for a formally verified Ada compiler, but I couldn't see any sign of one. I find that surprising.)
No reason the same couldn't be done for Rust, and/or its standard library. Would be a lot of work, of course, but it would close the door on some of the bugs you have in mind.
It wouldn't save you from operating-system bugs, but formally verified operating systems are a possibility too. [1]
> And even then, they run on insecure hardware (rowhammer and spectre anybody?).
True, but I think it's safe to say that insecure hardware isn't usually near the top of our practical security concerns. I imagine one can greatly reduce the risk of such vulnerabilities if high-performance isn't required, and AMD/Intel aren't the only options.
> Would that be a complete silver bullet, tho? Nope.
I don't think anyone would disagree!
> So I agree with you that using a language like C++ isn't exactly "optimal"
Yeah, I think the distinction here is that I don't consider it "suboptimal"; I consider it an absolute disaster. I would call Rust "suboptimal" in that the language contains some soundness holes and the stdlib contains unsafe code; issues, yes, but practically still a massive improvement.
Even if a race condition bug does sneak in, it won't be as catastrophic as it would be in C/C++.
Writing concurrent code is hard in any language. There's no magic bullet; Go and Rust are not immune to concurrency bugs. If anything, tools like the race detector should always come built in, and every dev working on concurrent code should use one.
GC does not provide memory safety at a systems level; poorly written code has the same classes of issues as C++ once compiled. Also, C++ has "memory safety" idioms similar to garbage collection, via reference counting and RAII.
Google has a C++ race detector (ThreadSanitizer, which Go's own race detector is built on) and it did not detect this issue (or it was ignored).
Neither of those things you listed reduce the severity of the bug, and both are available in well written C++ code as google tends to do.
What do you mean? If you don't use the unsafe package, you can't have the same problems as C/C++, so it is memory safe. The bug in Chrome can't happen in Go without using unsafe.
Go compiles down to machine code; it has a stack and a heap, it has structures on that stack and heap, and it has data races. Those are all the required components for a use-after-free. You can have data races in code compiled by the stock Go compiler without any use of unsafe (also, why have race detector tooling if this were false?). It would probably be even easier to exploit than the bug here, because one wouldn't need to bother with the ASLR pointer exfiltration they had to do.
It may be harder to write exploitable code, but the language's memory-safety guarantees at the source level do not guarantee memory safety at runtime, especially in the face of concurrency.
The race condition exploits you're thinking of are not trivial and need specific code to pull off, unlikely to be a mistake like in C/C++. You just don't find them in normal Go programs.
Security bugs are unavoidable, Rust/Java or not. This calls for architectural defense in depth.
A lot of these exploits would be rendered harmless by the well-established technique of splitting one huge monolithic app into processes with limited capabilities, communicating via simple, narrow channels. Qmail[1] is a good example of such an architecture. The architectural support is already present in common OSes.
Maybe. I'm happy to run any formally verified C as my basement, for example the seL4 kernel, or for C to be used as an intermediate language. Especially as there should be less code at the bottom of the stack. But C++ seems worse, it's a far more complex language, designed and built to try and scale C to building applications. This creates a far larger attack surface.
Web browsers are unfortunately a case where even memory safe-ish languages are really an incomplete solution. Writing a JavaScript JIT in Rust won't prevent optimizer bugs.
While this is true, a lot of exploitable bugs attributed to the JS engines are actually in implementations of built-in methods (String.split, Array.join, etc.). These can benefit from memory-safe languages.
Occasionally you do just end up with straight-up exploitable optimizer bugs, though. The literature has some powerful techniques to let us get rid of those too over time, but it's not as simple as rewrite-it-in-Rust. One thing that might help is simple layering: if the JIT could compile to something like wasm as an intermediate target, then you would have an extra layer of defense against optimizer bugs. This is something that's been talked about, but I don't know if anyone has seriously looked into how practical it is.
This is why I started disabling JS by default using uBlock Origin. I whitelist the websites I'm interested in, but at least I avoid a lot of the exploits that require JS, in case I'm unlucky and land on one of those websites with malicious JS. I discovered DuckDuckGo doesn't block websites as eagerly as Google, and I've landed in a few tricky places.
If you want Google Search without Google, there's Startpage. It acts as a proxy between you and Google, protecting your privacy. I've been using it exclusively for maybe two years now. I am a bit upset at not supporting the more involved privacy-friendly projects, like DuckDuckGo and others (there are quite a few), but Startpage gives me the right mix of privacy, efficiency, and social awareness at the moment.
I wonder if this exploit would be possible if Chrome was packaged using the UWP Desktop Bridge on Windows 10. I doubt the sandbox prevents UAF, but surely it would have prevented executing an EXE file.
Is there any way to test this on other Chrome-based browsers?
For example, the current version of Iridium is 2019.04.73.0 (based on Chromium 73.0.3683.103); it doesn't get updated that often, but it's a useful and stable browser.
Any way to mitigate this exploit via a setting or extension?
It's checking for Chrome versions below the current one, 78, so update to that. You also have to be on Windows to be affected. On Windows, make sure you have your antivirus set up to block random executables from downloading and executing. I'd also block those hosts at the firewall and DNS. It gets more complicated from there if you're infected. But you probably aren't.
Edit: just realized you actually asked how to tell if you were infected. Check the Windows Task Scheduler for unknown tasks; it installs items there for persistence.
Edit: search your history and hard drive for "behindcorona" domains. That's where it loads things from. There are more specifics in the page.
The CVE number was reserved in July[1]. That doesn't mean the vulnerability was found in July. The vulnerability was reported to Google October 29[2], so between report to fix roll out was only ~2 days. Although the bug tracker entry is hidden[3], you can deduce the time the entry was created by iterating over bug IDs, which indicates it was filed between 8:56 AM PDT[4] and 9:11 AM PDT[5].
The fix was sent for review Oct 29 4:29 PM PDT and submitted at 5:47 PM[6]. It was cherrypicked Oct 30 at 9:51 AM[7].
> Date Entry Created
> 20190718
> Disclaimer: The entry creation date may reflect when the CVE ID was allocated or reserved, and does not necessarily indicate when this vulnerability was discovered, shared with the affected vendor, publicly disclosed, or updated in CVE.
The tragedy of our generation is that people who are smart and determined enough to find and exploit these vulnerabilities can't find better uses for their talents.
Or, maybe downloading arbitrary code from the internet without even a same-origin check, and being able to dynamically construct executable JavaScript from downloaded binary data masked as image and then have a whole generation of developers depend on this model wasn't so good an idea after all.
eval() has no economic purpose. It hasn't been removed by browsers purely because that would break backward compatibility.
Non-same-origin JavaScript doesn't have an economic purpose either. I believe it will be completely eradicated from browsers within this century (if not within a couple of decades).
Well, the smartest talents of previous generations built and exploded hundreds of nuclear weapons, including a couple on innocent civilians; created ICBMs that can destroy civilization in a few minutes and brought the world to the brink several times; and drove a massive population explosion and a fossil-fuel consumption binge that may yet destroy civilization.
So I definitely think the smartest of more recent generations who are employing their skills in these software hacks and ad tracking are perhaps saintly in comparison :-)
Sadly, I had a big argument with my team, and they kept pressing me into doing exactly the first thing. "Success depends on optimizing sales, and optimization is done by the money/brains metric." Something I'm not really ready to accept (yet).
In this day and age, it could also be well paid nation state hackers. It is well understood that places like the NSA actively seek out zero day exploits and use them to perform whatever operations they need to do. They don't report them to software devs that can patch them.
Why would they? People who break DRM for fun and profit look with disdain at their whitehat counterparts.
Same with ad fraud. People in that particular business take pride in being able to fool a boatload of the best and brightest computer scientists who work on bot detection.
As for the people in the good old browser exploit scene, I think they have a good time laughing at the vain efforts and ineptness of their counterparts too.
I think people are wrong to think of it as a money thing.
As I understand these RCEs (in general; perhaps not this particular one), a frequent root cause seems to be saving the instruction pointer on the stack adjacent to untrustworthy data that may have propagated down the call chain. Are there no initiatives attempting to change this convention, like not saving the IP on the stack? Perhaps at the CPU architecture level or at the compiler level?
Control-flow integrity is the group of techniques meant to deal with this problem. There are settings for Microsoft's C++ compiler to enable them, but the best way to accomplish this is with a hardware shadow stack. Intel CET does this, but I'm not sure when the hardware will be available.
There are already many modern mitigations present to defeat the kind of stack overflow control of the instruction pointer that you're imagining -- stack canaries, DEP, ASLR, NX, and others.
Simply gaining control of the instruction pointer through a stack overflow as you describe stopped working a decade or so ago due to these mitigations.
This is completely unrelated to this class of issue. You're thinking of stack-based buffer overflows. This is all about heap manipulation and doesn't involve an overflow, not in that sense anyway.