In case you've ever wanted an opportunity to learn Zig and WebGPU, we have Zig bindings to Dawn (the Chrome implementation) with a fair number of examples in Zig[0] - we're also working on getting WebAssembly support going right now.
Can be a bit easier than trying it out in-browser (works on Linux, etc.)
I'm shamelessly posting here because we're looking for folks[1] to contribute by porting more webgpu examples over to Zig :) Can be a fun way to get started with both, though.
> Why it is this underestimated and underappreciated
Wait... when has it been underappreciated?
Besides things like ChromeOS, many "platform native" apps have moved to become essentially wrappers around web apps. It's pretty safe to say that what was originally built as web technology has essentially become the modern baseline of cross-platform development (HTML/CSS for specifying interface layout, JS for business logic, and WASM/WebGPU to take advantage of extra hardware capabilities under a generic wrapper), and it's only natural that more capabilities are desired of it.
I think what's lacking right now is more reflection on how to deal with the security and privacy implications in a future where applications are all essentially zero-click installs. WASM has already been used for cryptojacking; it's not that hard to imagine WebGPU being used for that too. Security implications have been the bane of earlier web distribution of code: from ActiveX to Java applets to Flash, the history of web code distribution (and of security flaws in it) is a very long one...
WebGPU and WASM are not like a camera and microphone, where you can just pop up a permission dialog to turn them on/off. In order for them to gain wider adoption (rather than end up being a security liability like Flash), we need to think about how a user could control granular access to these resources for what are essentially untrusted (and opaque) applications.
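To make that asymmetry concrete, compare how the two capabilities are handed out today. This is a minimal sketch assuming the standard browser APIs (WebGPU typings as in @webgpu/types); the function names are just illustrative:

```ts
// Camera/mic: the browser interposes an explicit permission dialog
// before the promise resolves.
async function requestCamera(): Promise<MediaStream> {
  return navigator.mediaDevices.getUserMedia({ video: true, audio: true });
}

// WebGPU: no prompt at all today. Any page can take an adapter and
// start dispatching compute work silently.
async function requestGpu(): Promise<GPUDevice | null> {
  const adapter = await navigator.gpu.requestAdapter();
  if (adapter === null) return null; // WebGPU unavailable or blocked
  return adapter.requestDevice();    // no user-visible consent step
}
```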
I agree with you on this, but I don’t think it’s being underestimated or underappreciated. WASM comes up in almost every design discussion I have with people where we’re considering doing heavyweight data processing or graphics stuff in the browser. Sure you’ll find a couple doubters here and there (like the “it’s just Flash” person ITT) but overall I feel like these technologies are “correctly rated”.
> I don't understand why it is this underestimated and underappreciated AR content / Games, that'll be huge
Lol, they've been saying that since 3dfx and VRML and three decades later, still nobody cares. It's just marketing hogwash from desperate executives. Why would it be earth shattering? It's just games. Faster computers. Incremental graphics. Yay? It's nothing revolutionary.
Flash was a closed standard that required proprietary tools to produce for, and it had a single closed-source runtime (which was buggy and had frequent severe security issues). The web platform has strikingly little in common with Flash, from the top-level decisions to the bottom-level details. The similarities are purely superficial.
Flash piled on features well after its heyday. In a lot of cases, it's pretty likely that your first experience of some of the more advanced capabilities came from the web platform equivalent rather than from Flash Player. I never got the impression that the sole reason for adding features to the web was just to replace similar Flash functionality; even during Flash's heyday, alternatives to what it offered were prevalent at points.
The closest I think the web platform came to actually copying Flash was the SVG standard, which has some really peculiar features... like raw sockets. It's hard to deny that that really was motivated by, and intended for, competing with Flash. However, if my recollection serves correctly, that was done by Adobe prior to acquiring Macromedia.
This is only approximately true at the engineering level, but at a higher level it’s the exact opposite of true. A widely supported, open standard like this is a fundamentally distinct thing, even if it hypothetically produced the exact same technological capabilities on a process-level. (Which again, these technologies exceed Flash in terms of scope.)
I do lament Flash, though, because we would have gotten to this exact spot sooner if a better transition path had been navigated; more importantly, the loss of the content and skill base of the Flash era was huge.
Ah yes, modern web technology. A standard so widely supported users can choose between Google Chrome, the Chrome reskin by Microsoft, the Chrome reskin by Apple - and even the privacy-conscious Chrome reskin by Brave.
A standard so open any stakeholder can push forward the cutting edge of the standard by proposing features and getting them widely adopted, which for some reason only happens to things invented at Google.
It's nothing like Flash - it's much more like OOXML.
Firefox invented and is still leading development of WASM. And Safari is really only distantly related to Chrome these days; their last common ancestor was in 2013. WASM wasn't even announced until 2015. Safari's JavaScriptCore replaced its JS JIT in 2016, and V8 has had two new compiler pipelines since then.
So I clicked that link, saw “open source” and went “Oh crap, there was an open way to write, compile and run Flash and we ditched it?!” But then I read on and realized: No, this is an open source tool to compile C/C++ code to run on Flash, a closed-source runtime.
Back in the 2000s when I was a teenager with virtually $0 in personal savings (I had the family computer, but no budget for buying professional software) I wished I could make the kinds of Flash games I played in the browser. The fact that Flash required expensive software (I wasn’t comfortable pirating software until I had my own computer that I could risk on my own) meant I couldn’t participate.
I’m admittedly a bit jealous that today’s kids can get a programming environment set up for free (once they have a computer) and make games and whatnot that will run in the browser that they can share with their friends. But deep down I’m happy for them: The world of programming is deeply imperfect, but it’s better than it was when I was growing up, and that is in fact progress.
Does anyone know what the usual performance overhead is of doing WebGPU in a browser vs. desktop software running a GPU kernel? I know the feature sets differ, but we could take simple examples to compare (Game of Life, Gaussian blur, Mandelbrot, etc.).
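For anyone who wants to measure this themselves, here's a rough sketch of a browser-side harness; the trivial doubling kernel and the sizes are placeholders (swap in Game of Life, a blur, etc.), and a native harness around Dawn or wgpu would mirror the same structure for an apples-to-apples comparison:

```ts
// Time N dispatches of a WGSL compute kernel, wall-clock, including
// submission overhead (which is exactly what the browser-vs-native
// question is about).
const kernelSource = /* wgsl */ `
  @group(0) @binding(0) var<storage, read_write> data: array<f32>;

  @compute @workgroup_size(64)
  fn main(@builtin(global_invocation_id) id: vec3<u32>) {
    data[id.x] = data[id.x] * 2.0; // stand-in for blur/mandelbrot/etc.
  }
`;

async function timeKernel(iterations = 100): Promise<number> {
  const adapter = await navigator.gpu.requestAdapter();
  if (!adapter) throw new Error("WebGPU unavailable");
  const device = await adapter.requestDevice();

  const elements = 1024 * 1024;
  const buffer = device.createBuffer({
    size: elements * 4, // 1M f32s
    usage: GPUBufferUsage.STORAGE,
  });
  const pipeline = device.createComputePipeline({
    layout: "auto",
    compute: {
      module: device.createShaderModule({ code: kernelSource }),
      entryPoint: "main",
    },
  });
  const bindGroup = device.createBindGroup({
    layout: pipeline.getBindGroupLayout(0),
    entries: [{ binding: 0, resource: { buffer } }],
  });

  const start = performance.now();
  for (let i = 0; i < iterations; i++) {
    const encoder = device.createCommandEncoder();
    const pass = encoder.beginComputePass();
    pass.setPipeline(pipeline);
    pass.setBindGroup(0, bindGroup);
    pass.dispatchWorkgroups(elements / 64);
    pass.end();
    device.queue.submit([encoder.finish()]);
  }
  await device.queue.onSubmittedWorkDone(); // drain the GPU queue
  return (performance.now() - start) / iterations; // avg ms per dispatch
}
```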
If you're talking about any game released in the last 5 years (you probably are), then WebGPU is pretty much fine for anything you could throw at it. Your limitation is going to be your usage of it far before any limitation of the actual API.
WebGPU as an API is basically the intersection of Vulkan, Metal, and DirectX 12: the common subset of functionality that all three can provide.
If you're using it natively (i.e. using it as a graphics abstraction on a native platform, not in a browser), it's very little overhead compared to using those other APIs directly.
Where things start to differ:
* When used in browsers, WASM->JS->C API calls have overhead. Not much, but some.
* WGSL compilers, I believe, aren't super optimized right now. In theory they could be almost as fast as a SPIR-V -> target pipeline, but for now I think they prioritize safety over speed.
* Multi-threading support: using the WebGPU API across threads is, as far as I know, not supported yet but should come after v1. This isn't a huge deal IMO; you can just use it in one thread dedicated to rendering (see the sketch after this list).
* Hyper-modern features, like Mesh shaders (which IIRC are only available in DX12, and in Vulkan only available on Nvidia cards), or whatever Metal 3 provides.
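For what it's worth, the "one thread dedicated to rendering" pattern from the list above is straightforward with OffscreenCanvas where browsers support it; a rough sketch (file names are placeholders):

```ts
// main.ts: hand the canvas off to a dedicated rendering worker so every
// WebGPU call happens on one thread.
const canvas = document.querySelector("canvas")!;
const offscreen = canvas.transferControlToOffscreen();
const worker = new Worker("render-worker.js");
worker.postMessage({ canvas: offscreen }, [offscreen]);

// render-worker.ts: everything GPU-related lives on this worker.
onmessage = async (e: MessageEvent) => {
  const canvas: OffscreenCanvas = e.data.canvas;
  const adapter = await navigator.gpu.requestAdapter();
  if (!adapter) return;
  const device = await adapter.requestDevice();
  const ctx = canvas.getContext("webgpu") as GPUCanvasContext;
  ctx.configure({ device, format: navigator.gpu.getPreferredCanvasFormat() });
  // ...create pipelines and run the render loop here, all on this thread.
};
```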
I think WebDNN[0] is working on enabling this sort of thing using WebGPU compute as a backend, but I have no ML experience myself, sorry. I do think WebGPU's compute API would be sufficient for ML inference, but I can't say for sure.
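To give a flavor of what compute-based inference primitives look like, here's a toy sketch of a naive matrix multiply through WebGPU compute. Everything here (the sizes, names, and the deliberately unoptimized kernel) is illustrative, not how WebDNN or any real ML backend is implemented:

```ts
// Naive N x N matmul on the GPU: out = a * b, one invocation per output cell.
const N = 256;
const matmulWGSL = /* wgsl */ `
  @group(0) @binding(0) var<storage, read> a: array<f32>;
  @group(0) @binding(1) var<storage, read> b: array<f32>;
  @group(0) @binding(2) var<storage, read_write> out: array<f32>;

  @compute @workgroup_size(16, 16)
  fn main(@builtin(global_invocation_id) id: vec3<u32>) {
    let n = ${N}u;
    if (id.x >= n || id.y >= n) { return; }
    var sum = 0.0;
    for (var k = 0u; k < n; k = k + 1u) {
      sum = sum + a[id.y * n + k] * b[k * n + id.x];
    }
    out[id.y * n + id.x] = sum;
  }
`;

async function matmul(device: GPUDevice, a: Float32Array, b: Float32Array): Promise<Float32Array> {
  const bytes = N * N * 4;
  const mk = (usage: number) => device.createBuffer({ size: bytes, usage });
  const bufA = mk(GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_DST);
  const bufB = mk(GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_DST);
  const bufOut = mk(GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_SRC);
  const readback = mk(GPUBufferUsage.COPY_DST | GPUBufferUsage.MAP_READ);
  device.queue.writeBuffer(bufA, 0, a);
  device.queue.writeBuffer(bufB, 0, b);

  const pipeline = device.createComputePipeline({
    layout: "auto",
    compute: {
      module: device.createShaderModule({ code: matmulWGSL }),
      entryPoint: "main",
    },
  });
  const bind = device.createBindGroup({
    layout: pipeline.getBindGroupLayout(0),
    entries: [
      { binding: 0, resource: { buffer: bufA } },
      { binding: 1, resource: { buffer: bufB } },
      { binding: 2, resource: { buffer: bufOut } },
    ],
  });

  const enc = device.createCommandEncoder();
  const pass = enc.beginComputePass();
  pass.setPipeline(pipeline);
  pass.setBindGroup(0, bind);
  pass.dispatchWorkgroups(N / 16, N / 16); // one 16x16 tile per workgroup
  pass.end();
  enc.copyBufferToBuffer(bufOut, 0, readback, 0, bytes);
  device.queue.submit([enc.finish()]);

  await readback.mapAsync(GPUMapMode.READ);
  const result = new Float32Array(readback.getMappedRange().slice(0));
  readback.unmap();
  return result;
}
```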
What's the use scenario here where I'd use a browser for some AI work? Is this for a WASM C++ game engine? Otherwise I'd probably just write the matrix multiplication natively in C++/Python/even Node.js, without the browser as the interface.
> As GPU sandboxing isn't implemented yet for the WebGPU API, it is possible to read GPU data for other processes! Don't browse the web with it enabled.
Can one effectively isolate WebGPU in a sandbox? And even if that is the case, would it leave open the possibility for fingerprinting?
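On the fingerprinting half of that question: sandboxing doesn't obviously close it off, because the adapter's capability set has to be exposed to the page for the API to be usable at all, and it varies by hardware and driver. A small sketch of that surface (the spec encourages implementations to bucket these values to limit entropy, so how identifying this is in practice will vary by browser):

```ts
// Collect some of the hardware-dependent values any page can read without
// a prompt; together they form a coarse device fingerprint.
async function gpuFingerprint(): Promise<string> {
  const adapter = await navigator.gpu.requestAdapter();
  if (!adapter) return "no-webgpu";
  const features = [...adapter.features].sort().join(",");
  const limits = [
    adapter.limits.maxBufferSize,
    adapter.limits.maxComputeWorkgroupSizeX,
    adapter.limits.maxTextureDimension2D,
  ].join("/");
  return `${features}|${limits}`;
}
```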
[0] https://machengine.org/gpu/
[1] https://github.com/hexops/mach/issues/230