There's ambient occlusion, which computes light intensity with high spatial resolution but completely handwaves the direction the light is coming from. OTOH there are environment maps, which are rendered from a single location, so they have no spatial resolution but precise light intensity for every angle. Radiance Cascades observes that these two techniques are the two extremes of a spatial vs angular resolution trade-off, and that it's possible to render any trade-off in between.
Getting information about light from all angles at all points would cost (all sample points × all angles), but Radiance Cascades computes and combines (very few sample points × all angles) + (some sample points × some angles) + (all sample points × very few angles), which works out to be much cheaper, and is still sufficient to render shadows accurately if the light sources are not too small.
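To make that concrete, here's a back-of-the-envelope sketch in Python. All the grid sizes, ray counts, and the 4x branching factor are made-up assumptions for illustration, not numbers from any particular implementation:

```python
# 2D toy sizing: probes halve per axis each level, rays per probe quadruple.
W = H = 256      # cascade 0 probe grid (finest spatial resolution)
ANGLES0 = 4      # rays per probe at cascade 0 (coarsest angular resolution)
LEVELS = 6

# "all sample points x all angles": every probe traces the full angular set
brute_force = (W * H) * (ANGLES0 * 4 ** (LEVELS - 1))

# Radiance Cascades: each level trades spatial for angular resolution
cascades = sum((W >> i) * (H >> i) * (ANGLES0 * 4 ** i) for i in range(LEVELS))

print(brute_force)   # 268435456 rays
print(cascades)      # 1572864 rays, ~170x fewer
```

Note that every level traces the same number of rays (halving the probe grid per axis divides the probe count by 4, which exactly cancels the quadrupling of angles), so the total stays linear in the cost of a single level.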
Path tracing techniques usually focus on finding the most useful rays to trace, concentrating on rays that are likely to hit a light (importance sampling).
RC is different, at least in 2D and screen-space 3D. It brute-force traces fixed sets of rays in regular grids, regardless of what is in the scene. There is no attempt to be clever about picking the best locations and best rays. It just traces the exact same set of rays every frame.
Full 3D RC is still too expensive for anything finer than Minecraft-chunky voxels. There's SPWI RC, which is more like other real-time raytracing techniques: it traces rays in the 3D world, but not exhaustively, only from positions visible on screen (similar to Froxels and Surfels elsewhere).
The penumbra hypothesis is the observation that hard shadows require high resolution to avoid looking pixelated, but soft shadows can be approximated well by bilinear interpolation of low-res data.
RC adjusts its sampling resolution to the coarsest resolution it can get away with, so that the dark-to-light gradient at the edge of a soft shadow is produced entirely by interpolating just two samples.
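As a toy illustration of that two-sample interpolation (the radiance values are made up): the whole penumbra gradient between two coarse probes falls out of a lerp, with no extra ray tracing.

```python
def lerp(a, b, t):
    return a + (b - a) * t

dark, lit = 0.1, 1.0   # radiance at two adjacent low-res probes (made up)
# 8 screen pixels between the probes get a smooth shadow gradient for free:
edge = [lerp(dark, lit, x / 7) for x in range(8)]
print([round(v, 2) for v in edge])
# [0.1, 0.23, 0.36, 0.49, 0.61, 0.74, 0.87, 1.0]
```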
IIUC basically you have a quad/oct-tree of probes throughout the area of screen space (or the volume of the view frustum?). The fine levels use cheaper measurements, and the broad levels use more intensive ones. The number of levels/fineness determines resolution.
I guess for comparison:
- Radiance cascades: complexity based on resolution + view volume; can have leakage and other artifacts
- Ray tracing: complexity based on number of light sources, screen resolution, and noise reduction; has noise
- RTX: ??
- Radiosity: complexity based on surface area of scene
Also not sure, but I guess ray tracing + radiosity are harder to do on a GPU?
No octree/quadtree. It's just a stacked grid of textures, halving resolution (or otherwise reducing it) at each level. Low resolution layers capture many rays (say 4096 rays at the lowest spatial resolution) over longer distances. High resolution layers capture fewer rays (say 4 rays at the highest spatial resolution) over shorter distances. When you merge them all together, you get cheap soft shadows and a globally illuminated scene. In the post I wrote, I calculated it's similar in cost to firing 16 rays with a classic path tracer, but in theory it should look similar to 4096 rays (or whatever the highest cascade layer traces), with softer shadows.
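A sketch of the layer bookkeeping that stacked grid implies; the 4x branching factor, base grid size, and interval lengths here are my assumptions, and real implementations vary:

```python
BASE_PROBES = 512     # cascade 0 probe grid width (assumption)
BASE_RAYS = 4         # rays per probe at cascade 0
BASE_INTERVAL = 1.0   # length of cascade 0's ray-march interval, in pixels

def cascade(i):
    """Spatial resolution halves, angular resolution quadruples per level."""
    probes = BASE_PROBES >> i
    rays = BASE_RAYS * 4 ** i
    # Interval lengths quadruple too, so consecutive intervals tile each ray:
    start = BASE_INTERVAL * (4 ** i - 1) / 3
    end = BASE_INTERVAL * (4 ** (i + 1) - 1) / 3
    return probes, rays, (start, end)

for i in range(6):
    p, r, (s, e) = cascade(i)
    print(f"cascade {i}: {p}x{p} probes, {r} rays/probe, interval [{s:.0f}, {e:.0f}]")
```

With these numbers the top level ends up at 16x16 probes tracing 4096 rays each over the longest intervals; merging then walks from the top cascade down, interpolating each coarse level into the next finer one.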
Depending on your approach, the geometry of the scene is completely irrelevant to the cost. (Truly so for fixed step / DDA; JFA + distance fields have some dependence due to circle marching, but are largely independent.)
Is the key idea the fact that light intensity and shadowing require more resolution near the light source and lower resolution far from it?
So you have higher probe density near the light source and then relax it as distance increases, minimising the number of radiance collection points?
Also using interpolation eliminates a lot of the calculations.
Does this make any sense? I'm sure there's a lot more detail, but I was looking for a bird's eye understanding that I can keep in the back of my mind.