Hacker News

That would be so, so cool. I don't see this happening for a while unless a CAD vendor works with us, but it's definitely where I see this tech going.


How about 3D modeling with something like Blender?

BTW, the key to making stuff like that work well isn't just the UI affordances, but latency. Head tracking, room tracking, and video passthrough for AR are all highly sensitive to latency. You can see the effect even in the GIFs on your landing page: in the AR mode the floating windows slightly lag the external view when the user moves their head.

Latency is what makes many people nauseous (i.e., cybersickness), but as far as I know no one actually benchmarks it for VR or AR headsets or for VR games, and headset hardware makers keep chasing higher resolutions (and, secondarily, framerate). Everyone is familiar with the resolution vs. framerate vs. scene complexity (e.g., polygon count) tradeoffs, but latency figures in as well, and no one makes the effort to benchmark it (and let's face it, it's pretty hard, even in a research context) despite the huge impact it has on consumer acceptance. It's the elephant in the room everyone ignores except to blame someone else (software people blame the hardware, hardware folks blame the software, etc.).
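To put rough numbers on it (these stage timings are my own back-of-envelope guesses, not measurements from any real headset):

```python
# Rough motion-to-photon budget at a 90 Hz refresh rate.
# The per-stage timings below are illustrative assumptions,
# not measured figures for any particular device.
FRAME_BUDGET_MS = 1000 / 90  # ~11.1 ms per displayed frame

# Hypothetical pipeline stages, in milliseconds:
stages = {
    "sensor sampling + fusion": 2.0,
    "app/game simulation": 3.0,
    "GPU render": 5.0,
    "scanout to display": 11.1,  # worst case: up to one full refresh
}

total_ms = sum(stages.values())
print(f"frame budget: {FRAME_BUDGET_MS:.1f} ms")
print(f"worst-case pipeline: {total_ms:.1f} ms "
      f"(~{total_ms / FRAME_BUDGET_MS:.1f} frames of motion-to-photon lag)")
```

Even with made-up numbers like these, the point stands: the end-to-end figure is what a user's vestibular system experiences, and it can easily be a multiple of the frame time even when the renderer "hits framerate."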

Now, you're trying to make this a general productivity environment, so resolution is more important than usual to be able to read text without eyestrain, but for immersive gaming the available hardware has been more than good enough resolution-wise for at least two Moore's Law cycles, possibly as many as five. The same can't be said for latency.

And there are much better uses of the GPU than just pumping out more pixels and complexity, particularly for AR use cases: object detection and tracking, pose estimation, object masking and removal/replacement, etc., and all of those are even more sensitive to latency (relative to resolution) than the VR case.


We do have a lot of considerations for latency. Early footage is, well, early.

Head/room tracking is being worked on, but I'll keep people updated with results.

AR latency... we're basically just brute-forcing it: >90 FPS cameras, dump the data uncompressed via PCIe Gen3 x4 onto the compute pack, display it. Still working on the hardware verification, but it should be doable within a frame assuming there isn't a hidden bottleneck somewhere.
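For a quick sanity check on the bandwidth (the camera resolution and bit depth here are assumptions for illustration, not final specs):

```python
# Back-of-envelope: can uncompressed stereo camera traffic fit in
# PCIe Gen3 x4? Assumed: two 1920x1080 cameras, 10-bit raw Bayer
# (one sample per pixel), 90 fps.
CAMERAS = 2
WIDTH, HEIGHT = 1920, 1080
BITS_PER_PIXEL = 10
FPS = 90

# PCIe Gen3: 8 GT/s per lane with 128b/130b encoding,
# ~985 MB/s usable per lane, so ~3.94 GB/s for x4.
PCIE_GEN3_X4_BPS = 3.94e9

bytes_per_frame = CAMERAS * WIDTH * HEIGHT * BITS_PER_PIXEL / 8
throughput = bytes_per_frame * FPS  # bytes/second
headroom = PCIE_GEN3_X4_BPS / throughput

print(f"camera traffic: {throughput / 1e9:.2f} GB/s")
print(f"PCIe Gen3 x4 headroom: ~{headroom:.1f}x")
```

Under those assumptions the link has roughly 8x headroom, so the bus itself shouldn't be the frame-time bottleneck; DMA setup, memory copies, and display scanout are more likely places for hidden latency to accumulate.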


There are tools for latency measurement for VR applications; before FB acquired them, Oculus sold one for $99. But jitter is just as bad as latency, and that usually requires some real thought to work on.


It might be lower-hanging fruit to start with something like a VR renderer for OpenSCAD: it would be amazing to be able to type OpenSCAD code into a floating window and see a live-updating floating render of the object I am building next to it. I suppose the bounding box of the output could be considered a 3D VR window of sorts.
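The non-VR half of that loop is already easy to prototype, since OpenSCAD has a batch export mode. A minimal sketch (file names are illustrative; assumes `openscad` is on PATH):

```python
# Poll an .scad file and re-export an STL whenever it changes;
# a VR/desktop viewer could then hot-reload the STL.
import os
import subprocess
import time


def openscad_cmd(src: str, dst: str) -> list[str]:
    # OpenSCAD's documented batch mode: openscad -o <output> <input>
    return ["openscad", "-o", dst, src]


def export_stl(src: str, dst: str) -> None:
    subprocess.run(openscad_cmd(src, dst), check=True)


def watch(src: str, dst: str, interval: float = 0.5) -> None:
    """Re-export dst whenever src's mtime advances."""
    last = 0.0
    while True:
        mtime = os.path.getmtime(src)
        if mtime > last:
            last = mtime
            export_stl(src, dst)
        time.sleep(interval)

# Usage (illustrative): watch("model.scad", "out.stl")
```

The remaining work is the interesting part: getting the exported mesh into the headset's scene and re-uploading it on each change without a visible hitch.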


You might check out Gravity Sketch: https://www.gravitysketch.com/

At first blush it appears more like these other VR "sketching" apps than a serious CAD tool, but it's actually pretty feature-rich in that direction and continuing to evolve.


The Blender / FreeCAD communities might bite.


I was thinking of what GitHub does to preview STLs. I'm assuming WebGL with glTF export... meaning an ElectronJS app in transparent mode could render a 3D thing like a part to be 3D printed.


Yes! Some desktop 'widgets' that show 3D content from various formats.

I wouldn't go as far as fsv[1] but some basic in-environment vis is called for.

[1] http://fsv.sourceforge.net/screenshots/


Yeah, sure, definitely FreeCAD. OpenCASCADE failing to make fillets, blocking the main thread with long operations and crashing the entire application is exactly what I always wanted to see in VR!



