Hacker News

I'm trying to understand the basic idea of Radiance Cascades (I don't know much about game development and ray tracing).

Is the key idea the fact that light intensity and shadowing require more resolution near the light source and lower resolution far from it?

So you have higher probe density near the light source and then relax it as distance increases, minimising the number of radiance collection points?

Also using interpolation eliminates a lot of the calculations.

Does this make any sense? I'm sure there's a lot more detail, but I was looking for a bird's eye understanding that I can keep in the back of my mind.



Essentially yes.

There's ambient occlusion, which computes light intensity with high spatial resolution but completely handwaves the direction the light is coming from. OTOH there are environment maps, which are rendered from a single location, so they have no spatial resolution but precise light intensity for every angle. Radiance Cascades observes that these two techniques are the two extremes of a spatial-vs-angular resolution trade-off, and that it's possible to render any trade-off in between.

Getting information about light from all angles at all points would cost (all sample points × all angles), but Radiance Cascades computes and combines (very few sample points × all angles) + (some sample points × some angles) + (all sample points × very few angles), which works out to be much cheaper, and is still sufficient to render shadows accurately if the light sources are not too small.
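The cost argument above can be made concrete with a back-of-the-envelope sketch (the grid size, level count, and ray counts below are my own illustrative numbers, not from the thread): in a 2D cascade, each level halves probe density per axis but quadruples the rays per probe, so every level costs about the same, and the total is far below "all points × all angles".

```python
# Illustrative cost accounting for a 2D radiance cascade.
# Assumed numbers: a 256x256 probe grid, 6 levels, 4 rays at level 0.

def cascade_cost(grid=256, levels=6, base_rays=4):
    total = 0
    for i in range(levels):
        probes = (grid // 2**i) ** 2   # probes thin out with each level
        rays = base_rays * 4**i        # angular resolution grows instead
        total += probes * rays         # per-level cost stays constant
    return total

# Naive alternative: finest angular resolution at every probe.
naive = 256**2 * (4 * 4**5)
print(cascade_cost(), "rays vs naive", naive)
```

With these numbers the cascade traces about 1.6M rays against 268M for the naive scheme, roughly a 170× saving.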


So I've been reading

https://graphicscodex.com/app/app.html

and

https://mini.gmshaders.com/p/radiance-cascades

so I could have a basic grasp of classical rendering theory.

I made some assumptions:

1. There's an isometric top-down virtual camera just above the player

2. The Radiance Cascades stack on top of each other, increasing probe density as they get closer to the objects and players

I suspect part of the increased algorithm efficiency results from:

1. The downsampling of radiance measuring at some of the levels

2. At higher probe density levels, ray tracing to collect radiance measurements involves less computation than classic long path ray tracing

But I'm still confused about what exactly in the "virtual 3D world" is being downsampled and what the penumbra theory has to do with all this.

I've gained a huge respect for game developers though - this is not easy stuff to grasp.


Path tracing techniques usually focus on finding the most useful rays to trace, concentrating on rays that are likely to hit a light (importance sampling).

RC is different, at least in 2D and screen-space 3D. It brute-force traces fixed sets of rays in regular grids, regardless of what is in the scene. There is no attempt to be clever about picking the best locations and best rays. It just traces the exact same set of rays every frame.

Full 3D RC is still too expensive beyond voxels with Minecraft's chunkiness. There's SPWI RC that is more like other real-time raytracing techniques: traces rays in the 3D world, but not exhaustively, only from positions visible on screen (known as Froxels and Surfels elsewhere).

The penumbra hypothesis is the observation that hard shadows require high resolution to avoid looking pixelated, while soft shadows can be approximated with bilinear interpolation of low-res data.

RC adjusts its sampling resolution to be the worst resolution it can get away with, so that edges of soft shadows that are going from dark to light are all done by interpolation of just two samples.
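A tiny sketch of that trade-off (the light width and distances are hypothetical, chosen only to show the scaling): the penumbra cast by a light of width w grows linearly with distance d, so the probe spacing that still resolves it can grow linearly too, while the angular step must shrink as 1/d.

```python
# Penumbra-hypothesis scaling: at distance d, spatial resolution can
# get coarser (penumbra widens) while angular resolution must get finer
# (the light subtends a smaller angle). Illustrative numbers only.

def required_resolution(d, light_width=1.0):
    penumbra = light_width * d        # penumbra width grows with distance
    probe_spacing = penumbra / 2      # two samples across the penumbra
    angular_step = light_width / d    # angle subtended by the light
    return probe_spacing, angular_step

for d in (1, 2, 4, 8):
    s, a = required_resolution(d)
    print(f"distance {d}: probe spacing {s:.2f}, angular step {a:.3f} rad")
```

Doubling the distance doubles the tolerable probe spacing and halves the tolerable angular step, which is exactly the halving/quadrupling pattern the cascade levels follow.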


Thanks for taking the time to provide more details on how radiance cascades work.


I didn't get it either, I found this though which seems to be a much better fundamentals of radiance cascades: https://mini.gmshaders.com/p/radiance-cascades

IIUC basically you have a quad/oct-tree of probes throughout the area of screen space (or volume of view frustum?). The fine level uses faster measurements, and the broad level uses more intensive measurements. The number of levels/fineness determines resolution.

I guess for comparison:

- Radiance cascades: complexity based on resolution + view volume; can have leakage and other artifacts

- Ray tracing: complexity based on number of light sources, screen resolution, and noise reduction; has noise

- RTX: ??

- Radiosity: complexity based on surface area of scene

Also not sure, but I guess ray tracing + radiosity are harder to do in GPU?


No octree/quadtree. It's just a stacked grid of textures, halving resolution (or otherwise reducing it) each level. Low-resolution layers capture many rays (at the lowest resolution, say 4096 rays) over longer distances at low spatial resolution. High-resolution layers capture fewer rays (at the highest resolution, say 4 rays) over shorter distances at high spatial resolution. When you merge them all together, you get cheap, soft shadows and a globally illuminated scene. In the post I wrote, I calculated it's similar in cost to firing 16 rays with a classic path tracer, but in theory it should look similar to 4096 rays (or whatever the highest cascade layer does), with softer shadows.
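That merge step can be sketched in 1D (a simplification of my own, with made-up radiance values): each short near-field interval is continued by the coarser far-field level, interpolated up to the finer grid and weighted by how much light the near interval lets through.

```python
# Minimal 1D sketch of merging two cascade levels: the far level has
# half the probes, so it is linearly interpolated up to the near grid,
# then added wherever the near ray's interval did not block the light.

import numpy as np

def merge(near_radiance, near_transmittance, far_radiance):
    n = len(near_radiance)
    # upsample the coarse far-field level onto the fine probe grid
    far_up = np.interp(np.arange(n), np.arange(0, n, 2), far_radiance)
    # total = what the short ray hit, plus what shines through from afar
    return near_radiance + near_transmittance * far_up

near = np.array([0.0, 0.1, 0.0, 0.2])   # radiance hit by short intervals
trans = np.array([1.0, 0.5, 1.0, 0.0])  # 0 where the short ray was blocked
far = np.array([0.8, 0.6])              # coarse far-field radiance
print(merge(near, trans, far))
```

The transmittance weighting is what makes occlusion compose correctly across levels: a blocked near interval (transmittance 0) contributes only its own hit, no matter how bright the far field is.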

Depending on your approach, the geometry of the scene is completely irrelevant (fixed-step / DDA truly is; JFA + distance fields have some dependence due to circle marching, but are largely independent).


Thanks for the analysis and the insights - I guess I'll have to parse all of this a bit at a time :)



