In the field of computer graphics, radiosity denotes a class of algorithms that calculate the transfer of light within virtual three-dimensional geometries stored as data models in computer memory. Since radiosity algorithms are based on physical light phenomena, the resulting pictures have a very realistic appearance; radiosity has therefore found broad acceptance in fields such as architecture, virtual reality, and the film industry.
To compute the physics of light flow exactly, virtually every individual light ray would have to be tracked through the scene, with the effects of reflection and absorption taken into account. Since the number of possible rays is infinite, this is impossible in practice, and some kind of approximation or discretization of the object surfaces must be applied. Usually, this is accomplished through a collection of closely spaced points, i.e., through a subdivision or meshing of the surface geometry. In the case of radiosity, meshing is not a trivial task, and whereas many classical radiosity approaches find only suboptimal solutions that often overload the available computing resources, this work presents a novel view of the discretization issue: neural networks are trained with light rays in such a way that the resulting networks' skeletons can be interpreted as a meshing of the scene. The simulation of light transfer is then carried out on these neural meshes.
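To make the idea of a self-organizing meshing concrete, the following sketch shows one standard instance of the technique: a Kohonen-style self-organizing map whose nodes adapt to sample points on a surface patch. This is an illustrative assumption, not the algorithm developed in this work; the sample points, grid size, and learning schedule are all hypothetical stand-ins for the light-ray data the text describes.

```python
import numpy as np

# Illustrative sketch (not this work's actual algorithm): a self-organizing
# map whose nodes adapt to sampled surface points, so that the trained node
# lattice can be read as an adaptive mesh of the surface.

rng = np.random.default_rng(0)

# Hypothetical sample points (e.g. light-ray hit points) on a unit patch.
samples = rng.random((2000, 2))

# A small grid of map nodes, initialized randomly on the patch.
grid_w, grid_h = 8, 8
nodes = rng.random((grid_w * grid_h, 2))
coords = np.array([(i, j) for i in range(grid_w) for j in range(grid_h)], float)

def train(nodes, samples, epochs=5, lr0=0.5, sigma0=3.0):
    n_steps = epochs * len(samples)
    t = 0
    for _ in range(epochs):
        for x in samples:
            frac = t / n_steps
            lr = lr0 * (1.0 - frac)              # decaying learning rate
            sigma = sigma0 * (1.0 - frac) + 0.5  # shrinking neighborhood
            # Best-matching unit: the node closest to the sample point.
            bmu = np.argmin(np.linalg.norm(nodes - x, axis=1))
            # Neighborhood weights measured on the grid topology.
            d = np.linalg.norm(coords - coords[bmu], axis=1)
            h = np.exp(-(d ** 2) / (2 * sigma ** 2))
            # Pull the winner and its grid neighbors toward the sample.
            nodes += lr * h[:, None] * (x - nodes)
            t += 1
    return nodes

nodes = train(nodes, samples)
# After training, the nodes spread over the sampled region, and the grid
# topology induces a quadrilateral meshing of the patch.
print(nodes.min(axis=0), nodes.max(axis=0))
```

The key property exploited here is that the map's grid topology survives training: connecting neighboring nodes yields mesh cells whose density follows the density of the training samples, which is what allows a network skeleton to serve as a scene discretization.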
The idea of a self-organizing meshing scheme is unorthodox, and, in fact, it offers new approaches to the well-known principal hurdles that stand in the way of an optimal solution to the radiosity problem.