Render Settings

Introduction

This page gives an overview of the internal workings of the rendering engine. This information can be helpful, although it is not required reading.

To make a rendering as realistic as possible, the behaviour of light has to be reproduced. Fortunately, light is quite well understood: there are plenty of formulas describing it, and LuxRender's job is precisely to carry out the calculations those formulas describe.

LuxRender carefully calculates light intensity values for a huge number of points on the camera surface. The way the intensity for a particular point is found can be pictured as follows:

Schematic view of the rendering process: 1: camera surface, 2: camera, 3: scene geometry, 4: light source, 5: path

The first step is to determine the points on the camera surface for which the light intensity should be calculated.

Once a point has been chosen, the surface integrator constructs a path between a light source and the camera surface. The path can be a straight line, but it is usually a broken line because of reflections off the surfaces encountered on the way to the camera.

Calculating how a ray reflects off object surfaces is complicated by the fact that light is usually scattered in many directions when it is reflected. Instead of splitting the beam, a single direction is chosen, depending on the properties of the surface material. Based on the brightness of the light source and the material properties, the integrator calculates the resulting light intensity on the camera surface.

After the surface integrator, the volume integrator takes over: it computes effects such as smoke and then adjusts the calculated light intensity for that effect. This results in the final light intensity at the desired point of the camera surface, which we will refer to as a sample.

After a number of samples have been calculated, an image needs to be generated. The filter decides to which pixels each calculated sample contributes, and the tone mapping process converts the accumulated light intensities to the colour values of the pixels.
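
To make this pipeline concrete, here is a minimal Python sketch of the sample-to-pixel flow under simplifying assumptions: a stand-in integrator returns a fake intensity, a toy triangle-shaped kernel plays the role of the filter, and a simple clamp plays the role of tone mapping. None of the names correspond to actual LuxRender code; they only mirror the steps described above.

    import random

    WIDTH, HEIGHT = 8, 6
    film = [[0.0] * WIDTH for _ in range(HEIGHT)]     # accumulated intensity
    weight = [[0.0] * WIDTH for _ in range(HEIGHT)]   # accumulated filter weight

    def integrate(x, y):
        # Stand-in for the surface + volume integrators: returns a fake intensity.
        return random.random()

    def filter_weight(dx, dy, radius=1.0):
        # Illustrative triangle-like kernel; real filters (Mitchell, Gaussian, ...) differ.
        return max(0.0, 1.0 - abs(dx) / radius) * max(0.0, 1.0 - abs(dy) / radius)

    for _ in range(1000):                              # a number of samples
        sx, sy = random.uniform(0, WIDTH), random.uniform(0, HEIGHT)  # point on camera surface
        value = integrate(sx, sy)
        for py in range(int(sy) - 1, int(sy) + 2):     # splat onto neighbouring pixels
            for px in range(int(sx) - 1, int(sx) + 2):
                if 0 <= px < WIDTH and 0 <= py < HEIGHT:
                    w = filter_weight(px + 0.5 - sx, py + 0.5 - sy)
                    film[py][px] += w * value
                    weight[py][px] += w

    def tonemap(v):
        # Simple stand-in tonemapper: clamp to [0, 1] and scale to 8-bit.
        return int(min(max(v, 0.0), 1.0) * 255)

    image = [[tonemap(film[y][x] / weight[y][x] if weight[y][x] > 0 else 0.0)
              for x in range(WIDTH)] for y in range(HEIGHT)]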

Renderer

The renderer is a "container" of sorts for the system outlined above. Each renderer contains a different set of surface integrators. The different renderers exist to accommodate differences in how their surface integrators go about their work, as follows:

Sampler

The sampler renderer is the "classic" LuxRender; it contains all the surface integrators present in versions prior to 0.8. It is a standard, CPU-based raytracer.

hybrid sampler

Hybrid sampler is a modified form of "sampler" that supports GPU acceleration via OpenCL and the LuxRays library. Hybrid sampler uses your computer's graphics card to calculate the rays' actual flight through the scene, freeing your CPU to handle things such as the filter and the sampler. The available surface integrators, bidirectional and path, have the same settings as their counterparts in the regular "sampler" renderer, with the exception of light strategies. Path supports the "all", "auto", and "one" strategies; bidirectional only accepts "one". For hybrid bidirectional, you must explicitly declare your light strategy as "one"; if you can't find this control in your exporter, try enabling advanced parameters. Hybrid path accepts the parser default "auto", so explicitly declaring a light strategy is not necessary for it.

sppm

SPPM is an experimental stochastic progressive photon mapping integrator. It performs a series of photon mapping passes, refining the image progressively. See the SPPM page for more info.

Sampler

A critical part of the above process is how to pick the location of a sample and which direction to select when reflecting a ray. These, and many other similar decisions, are based on "random" numbers. The sampler is responsible for generating all the numbers needed for a sample. A good sampler distributes the numbers evenly (both within each sample and between samples) while avoiding predictable patterns.
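
As a rough illustration of what "distributing the numbers evenly while avoiding patterns" means, the following sketch compares purely random numbers with stratified (jittered) ones on the [0, 1) interval; the stratified set leaves much smaller gaps. This is only a toy comparison, not the implementation of any particular LuxRender sampler.

    import random

    N = 16

    purely_random = [random.random() for _ in range(N)]

    # Stratified ("jittered") numbers: one random value inside each of N equal strata.
    stratified = [(i + random.random()) / N for i in range(N)]

    def largest_gap(values):
        # The biggest hole left uncovered in [0, 1) -- smaller means "more even".
        xs = sorted(values)
        gaps = [xs[0]] + [b - a for a, b in zip(xs, xs[1:])] + [1.0 - xs[-1]]
        return max(gaps)

    print("largest gap, purely random:", largest_gap(purely_random))
    print("largest gap, stratified:   ", largest_gap(stratified))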

LuxRender's samplers can be divided into two categories: "dumb" and "intelligent". The "dumb" ones will generate samples, that is, pick locations and directions etc, without looking at the resulting sample values. That is, there's no feedback. The "intelligent" ones generate samples based on the results of the previous samples.

As "dumb" samplers don't analyse the values of calculated samples, they are a bit faster than the intelligent ones. However, except for very simple scenes this advantage gets lost as the rendering itself will be less efficient. This is especially noticeable with many light sources or specular materials and caustics. On the other hand, due to their non-adaptive nature, "dumb" samplers usually behave in a more predictive manner.

Therefore, for most regular scenes an "intelligent" sampler is recommended. The "dumb" samplers are recommended for animations (where their predictable behaviour is needed), really simple scenes, or quick previews. Also, the metropolis sampler does not help very much with the exphotonmap integrator; lowdiscrepancy is usually the best choice when using exphotonmap.

metropolis

The metropolis sampler is an "intelligent" sampler that uses the Metropolis-Hastings algorithm and implements Metropolis Light Transport (MLT). The metropolis sampler tries to "seek the light", which makes it a good choice in almost all situations. It does this by making small random changes to an initial reference sample and checking whether the new sample is more interesting, i.e. provides more light. If it is not, the sample is discarded and a new sample is taken. If the new sample is a nice bright sample, metropolis adopts it as the new reference sample and then explores the surrounding area using very small path mutations. This process of changing a sample is called a small mutation. This behaviour can also lead to fireflies (overly bright spots in the image).

This process of mutation allows the metropolis sampler to efficiently locate and explore paths which are important. However, in order to avoid getting stuck on some small but very bright area, it will once in a while generate a completely random sample and force it to become the new reference sample. This is called a "large mutation".

The "maxconsecrejects" (Maximum Consecutive Rejects) parameter controls when to generate a path mutation. The default value is 512, so if 512 samples are discarded it generates a new path mutation (time to look somewhere else). Then the "lmprob" (Large Mutation Probability) parameter is used to determine the chances of generating a large path mutation (new completely random sample from somewhere else in the image) or a small path mutation (something nearby). Before a sample is added to the film, the metropolis sampler decides if the sample should be accepted as the new base for mutations or rejected (the previous sample is then used instead). Lowering the maxconsecrejects parameter introduces bias and mutes light sources and caustics. For caustics higher values are better. Raising the lmprob value also introduces bias, setting it to 1 turns metropolis into a dumb random sampler. Lower values are less biased and will produce a more realistic result. Be careful not to bring this too low or it may introduce undesirable effects.

LuxRender's metropolis sampler is based on Kelemen's paper "A Simple and Robust Mutation Strategy for the Metropolis Light Transport".

lowdiscrepancy

The lowdiscrepancy sampler is a "dumb" sampler constructed to distribute the samples evenly, though still in a random fashion. It is currently the fastest "dumb" sampler (in terms of convergence speed) and the best option for most users who want a "dumb" sampler. It uses (0,2) quasi-random sequences for all parts of the engine.

Within the lowdiscrepancy sampler there are various pixel samplers, which control the order in which the pixels are sampled. The default is "vegas", but "hilbert" is very popular and is the default for animation settings in LuxBlend. Valid values for pixelsampler are hilbert, linear, vegas, lowdiscrepancy, tile, and random. Vegas, lowdiscrepancy, and random all take samples from random points around the image and then select new random points to sample. Hilbert, linear, and tile take the complete number of samples in one region before moving on to the next section; in other words, your image will not be fully covered right away. The 'lowdiscrepancy' pixel sampler is recommended for progressive rendering; 'hilbert', 'linear', and 'tile' are recommended for tiled rendering.
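
For intuition about what "low discrepancy" means, the sketch below generates the base-2 Van der Corput sequence, the classic building block of quasi-random sequences: successive values fill the [0, 1) interval evenly instead of clumping. It is a conceptual illustration only, not LuxRender's actual sequence generator.

    def van_der_corput(n, base=2):
        # Radical inverse: mirror the digits of n around the radix point.
        value, denom = 0.0, 1.0
        while n > 0:
            denom *= base
            n, digit = divmod(n, base)
            value += digit / denom
        return value

    # The first 8 values already cover [0, 1) quite evenly:
    print([round(van_der_corput(i), 4) for i in range(1, 9)])
    # -> [0.5, 0.25, 0.75, 0.125, 0.625, 0.375, 0.875, 0.0625]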

random

The random sampler is the simplest sampler. It generates completely (pseudo-)random sample positions, resulting in low convergence speed. This sampler is only intended for testing and analysis purposes by developers.

erpt

The ERPT (Energy Redistribution Path Tracing) sampler is similar to the metropolis sampler and is based on an Energy Redistribution scheme. It mutates samples which show good contribution, but instead of randomly walking over samples returned, it keeps a pool of image space samples. These image space samples, called chains, are mutated a number of times before the pool is updated.

noise-aware and user-driven sampling

LuxRender's samplers (except for erpt) have an additional feature to help them "focus fire" in order to refine a render. This function works by adding an additional channel to the framebuffer that acts as a "heat map" of where the sampler should add rays. There are two ways this map can be defined: noise-aware sampling and a user-defined map (either a pre-loaded one, or one painted with the refine area tool).

noise-aware sampling

The first method for generating the sampling map is to let LuxRender attempt to generate one itself based on the noisy regions of the image. This can be enabled with the "noise aware" option on the sampler. At a predetermined interval, LuxRender will evaluate the perceptual noise level in the rendered image. The map that is generated here will cause LuxRender to focus its samples on the areas it sees as being more noisy.
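
A minimal sketch of the idea behind such a map, under heavily simplified assumptions: estimate per-pixel "noise" from a fake framebuffer, turn it into a heat map, and draw new sample positions with probability proportional to the map. LuxRender's real noise metric is perceptual and considerably more involved.

    import random

    WIDTH, HEIGHT = 6, 4
    # Fake framebuffer: one noisy region on the right-hand side.
    frame = [[0.5 + (random.uniform(-0.4, 0.4) if x >= 3 else 0.0)
              for x in range(WIDTH)] for y in range(HEIGHT)]

    def local_noise(x, y):
        # Crude noise estimate: maximum difference to the 4-connected neighbours.
        diffs = [abs(frame[y][x] - frame[ny][nx])
                 for nx, ny in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1))
                 if 0 <= nx < WIDTH and 0 <= ny < HEIGHT]
        return max(diffs) if diffs else 0.0

    heat = [[local_noise(x, y) + 1e-3 for x in range(WIDTH)] for y in range(HEIGHT)]
    total = sum(sum(row) for row in heat)

    def pick_pixel():
        # Sample a pixel with probability proportional to its heat-map value.
        r = random.uniform(0.0, total)
        for y in range(HEIGHT):
            for x in range(WIDTH):
                r -= heat[y][x]
                if r <= 0.0:
                    return x, y
        return WIDTH - 1, HEIGHT - 1

    print("next pixels to refine:", [pick_pixel() for _ in range(5)])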

Note that noise-aware sampling may not work as well with the bidirectional integrator, as it has no control over bidir's "light tracing" side. Noise that comes from variance in the light path (such as caustics or areas that are only lit indirectly) may not clear even if focused on.

user driven sampling

LuxRender also allows the user to define a sampling map themselves. You can pre-load one by supplying an OpenEXR file (of the same dimensions as the render) for the "user sampling map" parameter. Additionally, the LuxRender GUI has a tool for painting a map over your render as it runs, found in the "refine area" tab. User-driven sampling does not require the "use noise aware" option to be enabled, but if both are enabled, a combination of the noise-aware map and the user-set map is used.

Note that neither map replaces the normal behavior of the sampler entirely. For example, the metropolis sampler with noise-aware enabled will "steer" using a combination of a pixel's brightness and noise level.

Surface Integrator

Surface integrators are central to the rendering process; they construct paths between light sources and the camera surface and calculate the incoming light intensity. The choice of the best integrator depends on the type of scene that needs to be rendered - for example, interiors usually benefit from a different integrator than exteriors.

bidirectional

The bidirectional integrator works by tracing rays both from the light towards the camera ("light path") and from the camera towards the light ("eye path"), hence the name. After it has generated a path in each direction, it will form new paths by trying all possible connections between the two original paths. In other words, it looks for a place where an eye path hit something within a line of sight of a light path hit point. This means that it is able to overcome the major problem with regular path tracing: finding the light sources.

The bidirectional integrator is unbiased, and considers all types of light interactions. It is suitable for interior renderings and other scenes with "difficult" lighting.
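
The "try all possible connections between the two paths" idea can be sketched roughly as follows; visible() and contribution() are placeholders for the real visibility test and light transport calculation, which are of course much more involved.

    def connect_paths(eye_path, light_path, visible, contribution):
        # eye_path / light_path: lists of hit-point vertices traced from the camera / light.
        total = 0.0
        for eye_vertex in eye_path:
            for light_vertex in light_path:
                # Only vertices that can see each other form a valid complete path.
                if visible(eye_vertex, light_vertex):
                    total += contribution(eye_vertex, light_vertex)
        return total

    # Toy usage: a 1D "scene" where everything sees everything and contributes equally.
    print(connect_paths([1, 2, 3], [9, 8],
                        visible=lambda a, b: True,
                        contribution=lambda a, b: 0.1))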

The bidirectional integrator is a good default choice if you are unsure which integrator is appropriate.

Bidirectional integrator schematic

path

The path integrator uses standard path tracing. It shoots rays from the eye (camera) into the scene, and keeps reflecting or refracting the ray off objects until it finds a light or the search is terminated. Like the bidirectional integrator, path considers all kinds of reflections, not just specular ones.
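
A stripped-down sketch of that loop, with the scene intersection, light test and BSDF sampling left as placeholder functions supplied by the caller:

    MAX_DEPTH = 10

    def trace_path(ray, intersect, is_light, emitted, sample_bsdf):
        # intersect(ray) -> hit or None; sample_bsdf(hit) -> (new_ray, reflectance)
        throughput = 1.0
        for _ in range(MAX_DEPTH):
            hit = intersect(ray)
            if hit is None:
                return 0.0                         # ray escaped the scene
            if is_light(hit):
                return throughput * emitted(hit)   # found a light: done
            ray, reflectance = sample_bsdf(hit)    # reflect/refract and continue
            throughput *= reflectance
        return 0.0                                 # search terminated

    # Toy usage: a "scene" where every ray immediately hits a light of intensity 1.
    print(trace_path(ray=None,
                     intersect=lambda r: "light",
                     is_light=lambda h: True,
                     emitted=lambda h: 1.0,
                     sample_bsdf=lambda h: (None, 0.8)))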

The path integrator is unbiased, and suitable for exterior renderings and reference renders. It has trouble dealing with complex lighting found in many interiors, and as a result is usually slower than bidirectional for interior renderings, and even some exterior renderings.

ex photon map

A spectral photon mapping integrator. In the first pass, it will cast rays from the light source and generate a photon map. In a second pass, it will render the map with direct lighting or path tracing.
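
A toy sketch of the two passes, assuming a flat photon list and a naive radius search; a real photon map uses a spatial data structure and proper density estimation.

    import random

    # Pass 1: shoot "photons" from a light at the origin and store where they land.
    photons = [(random.uniform(-1, 1), random.uniform(-1, 1), 0.01) for _ in range(5000)]
    #           x position            y position             carried power

    def estimate_radiance(x, y, radius=0.1):
        # Pass 2: gather the power of photons within `radius` of the shading point.
        area = 3.14159 * radius * radius
        power = sum(p for px, py, p in photons
                    if (px - x) ** 2 + (py - y) ** 2 < radius ** 2)
        return power / area

    print(estimate_radiance(0.0, 0.0))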

It's a good speed/quality rendering method, and is recommended if bidirectional is too slow for a particular job (such as animations, where the long, unpredictable render times of bidirectional are problematic). For more information, see Intro to ExPhotonMap.

directlighting

The directlighting integrator only covers light that shines on a surface directly (or via mirror and glass surfaces) - diffuse or glossy reflection between surfaces is ignored. Hence, the resulting image will not be very realistic.

This integrator constitutes the "classic" raytracing algorithm (Whitted) and is very fast, but only suitable for quick previews.

sppm

Experimental stochastic progressive photon mapping integrator. It will perform a series of passes where it first shoots rays from the camera, stores their positions (called "hitpoints"), then fires rays from the lights to see which hitpoints are illuminated. This process is repeated to progressively refine the image. For more information, see the SPPM page.

distributed path

The distributed path tracer is an extension of the regular path tracer. Instead of selecting a single reflection direction, it will select multiple directions and spawn additional rays along each direction. The number of rays to spawn is configurable per material type (diffuse, specular and glossy). It also features noise rejection techniques, such as discarding very bright sample values (which could lead to very bright pixels). However, due to the number of parameters, it can be quite difficult to adjust properly.
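
The "configurable number of rays per material type" idea can be sketched like this; the lobe names and counts are purely illustrative and do not correspond to LuxRender's actual parameter names.

    import random

    # Illustrative per-lobe ray counts, in the spirit of the per-material settings.
    RAYS_PER_LOBE = {"diffuse": 4, "glossy": 2, "specular": 1}

    def shade(hit_lobes, trace_one):
        # hit_lobes: lobes present at the hit point; trace_one(lobe) traces one ray.
        total, count = 0.0, 0
        for lobe in hit_lobes:
            for _ in range(RAYS_PER_LOBE[lobe]):
                total += trace_one(lobe)
                count += 1
        return total / max(count, 1)

    # Toy usage with a fake tracer.
    print(shade(["diffuse", "glossy"], trace_one=lambda lobe: random.random()))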

The distributed integrator is mainly meant to be used for animations, where noise control and predictable rendering times are essential. While it is unbiased in theory, typical parameter settings will cause it to be fairly biased.

For an in-depth description of its parameters, see this page: http://www.luxrender.net/wiki/Distributed_Path

instant global illumination (igi)

Experimental "Instant Global Illumination" integrator. It will automatically place "virtual point lights" at places it thinks should have indirect illumination. It is somewhat like an automatic version of using extra lamps to fake global illumniation, then rendering with classic raytracing.

Light Strategy

The light sampling strategy determines how LuxRender decides which lights should be checked with "shadow rays" (a test ray fired at a lamp to see if the point the current ray hit is illuminated by said lamp). The bidirectional integrator uses this option for the eye path and has another control to set the strategy for the light path.

Log Power is a modified version of the Power strategy, which works from the lamp's output power.
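
To illustrate what a power-based strategy does, here is a toy sketch that chooses which lamp to test with a shadow ray, with probability proportional to its output power. The lamp list and weighting are invented for the example.

    import random

    # (name, output power) of the lamps in a hypothetical scene.
    lights = [("sun", 1000.0), ("desk lamp", 60.0), ("indicator LED", 0.5)]

    def pick_light():
        # Probability of being chosen is proportional to the lamp's power,
        # so the dim LED is almost never tested with a shadow ray.
        total = sum(power for _, power in lights)
        r = random.uniform(0.0, total)
        for name, power in lights:
            r -= power
            if r <= 0.0:
                return name
        return lights[-1][0]

    print([pick_light() for _ in range(5)])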

Volume Integrator

The volume integrator handles calculation of light paths through volumes. The best choice will depend on the contents of your scene.

multi

The multi volume integrator allows a ray to scatter as many times as it needs until the ray is terminated by the surface integrator. This behaviour can be slow, but it is necessary for heavy-scattering effects such as SSS. If you aren't sure which volume integrator to use, you should use "multi", unless you have attached a "homogeneous" volume to your lights, in which case you should use "single".

single

The single volume integrator allows only single scattering for volume calculations. This means a ray will scatter once in a given volume, and then no more. This is a useful shortcut for atmospheric effects, since these are normally lightly-scattering volumes that cover the entire scene, and can be very slow to calculate with multi.

emission

Emission is the simplest volume integrator; it calculates only absorption and emission, not scattering. If you are using the "homogeneous" medium, you will need to use a different volume integrator to get results from it.

Filter

While calculating light samples, LuxRender treats the camera surface as a continuous surface; it does not yet take the number of pixels of the final image into account. Once a certain number of samples has been calculated, it is necessary to decide, for each sample, to which pixels of the rendering it contributes. This step is performed by the filter.

Typically, a sample contributes to multiple pixels, with most of its contribution being added to the pixel on which it is located and smaller amounts going to neighbouring pixels. Filters differ in the exact distribution of a sample's light contribution, and they also have a setting that defines the size of the total area over which the sample's contribution is spread.

Filter comparison: sinc, Mitchell, box and Gaussian
Choosing the right filter influences the sharpness and smoothness of the rendering, although the difference between the various filters is subtle, and differences in rendering time are negligible. Using the Mitchell filter with default settings is generally a good and safe choice. However, since it has negative lobes (the filter contains negative values), it may produce artifacts if the scene contains small but strong reflections of light sources. In this case the Gaussian filter may be a better choice.
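
The negative lobes mentioned above can be seen directly by evaluating the one-dimensional Mitchell-Netravali kernel (shown here with the common B = C = 1/3 parameters, which may differ from LuxRender's defaults); the weight dips below zero for distances between 1 and 2 pixels.

    def mitchell_1d(x, B=1/3, C=1/3):
        # Mitchell-Netravali reconstruction filter (1D), support of width 2.
        x = abs(x)
        if x < 1:
            return ((12 - 9*B - 6*C) * x**3 + (-18 + 12*B + 6*C) * x**2 + (6 - 2*B)) / 6
        if x < 2:
            return ((-B - 6*C) * x**3 + (6*B + 30*C) * x**2
                    + (-12*B - 48*C) * x + (8*B + 24*C)) / 6
        return 0.0

    for d in (0.0, 0.5, 1.0, 1.5):
        print(f"distance {d}: weight {mitchell_1d(d):+.3f}")
    # The weight at distance 1.5 is negative -- that lobe causes the dark edge artifact.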

 

Filter comparison: sinc, Mitchell, box and Gaussian
The dark edge around the specular reflection of the area light in the scene is caused by the negative lobes of the Mitchell filter.

Gaussian

Gaussian filter

box

The box filter gives quite a noisy and unsharp result and is therefore not recommended for general use.

sinc

sinc filter
The pair of dark and bright rings around the specular reflection of the area light in the scene is caused by the negative and positive lobes of the sinc filter.

triangle

triangle filter

Accelerator

The accelerator is used to figure out which objects do not need to be taken into account for the calculation of a ray. It is a way of "compiling" the scene into a format that can be rendered faster.

QBVH

This accelerator is a modified bounding volume hierarchy accelerator that has four children per node instead of two and uses SSE instructions to traverse the tree. It uses much less memory than a kd-tree while providing equivalent or better speed. In LuxRender, QBVH has much better SSE optimization than the kd-tree, and as a result it will be faster in almost all cases.

If you aren't sure which accelerator to use, QBVH is probably the best choice.

SQBVH

Variant of QBVH with spatial-split support. SQBVH offers better performance, but uses more memory and takes longer to build. It has the same parameters as QBVH.

kd-tree

A 3-dimensional kd-tree. The first split (red) cuts the root cell (white) into two subcells, each of which is then split (green) into two subcells. Finally, each of those four is split (blue) into two subcells. Since there is no more splitting, the final eight are called leaf cells. The yellow spheres represent the tree vertices.

Also known as "tabreckdtree", this accelerator is fairly fast, but is more memory-hungry than QBVH and not as well SSE-optimized.

A kd-tree uses only splitting planes that are perpendicular to one of the coordinate system axes. This differs from BSP trees, in which arbitrary splitting planes can be used. In addition, in the typical definition every node of a kd-tree, from the root to the leaves, stores a point.[1] This also differs from BSP trees, in which leaves are typically the only nodes that contain points (or other geometric primitives). As a consequence, each splitting plane must go through one of the points in the kd-tree. kd-tries are a variant that stores data only in leaf nodes; in this alternative definition of a kd-tree the points are stored in the leaf nodes only, although each splitting plane still goes through one of the points.
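
A minimal sketch of building such a tree over a list of 3D points, cycling through the coordinate axes and splitting at the median point so that every splitting plane passes through a stored point, as in the typical definition above:

    def build_kdtree(points, depth=0):
        # points: list of (x, y, z) tuples. Returns nested dicts; None for empty cells.
        if not points:
            return None
        axis = depth % 3                     # cycle x, y, z as the splitting axis
        points = sorted(points, key=lambda p: p[axis])
        median = len(points) // 2
        return {
            "point": points[median],         # the splitting plane passes through this point
            "axis": axis,
            "left": build_kdtree(points[:median], depth + 1),
            "right": build_kdtree(points[median + 1:], depth + 1),
        }

    tree = build_kdtree([(2, 3, 1), (5, 4, 0), (9, 6, 2), (4, 7, 8), (8, 1, 5), (7, 2, 6)])
    print(tree["point"], tree["axis"])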

none

It is possible to not use an accelerator, and simply brute-force the scene. This is not recommended in actual production use.

Speeding Up LuxRender

One of the biggest mistakes new users make with any global illumination renderer is trying to rely too much on indirect light. While Lux can solve most indirect lighting situations eventually, direct light is always faster. A good habit to get into is to use the direct lighting integrator when setting up your scene lighting. As a general rule of thumb, if your scene lighting looks good without global illumination, it will look great and render quickly when you switch to a GI-capable integrator (such as bidirectional).


There are also some steps you can take to optimize the scene to make it a bit faster:

First, keep reflectance (i.e. brightness) in the diffuse components of your materials below 0.8 or so. This allows a ray to "use up" its energy so Lux can be done with it sooner, and it also helps your scene to de-noise faster (a small sketch at the end of this section shows why). On the same note, avoid using specular colors higher than 0.25 (or much lower; 0.02-0.05 is a good range for most everyday objects). Reflection color on metal materials should be kept below 0.8 as well. If this makes your scene too dim, simply adjust the tonemapping to expose it more.

Second, limit the number of faces on objects that are used as meshlights. Each face in a mesh light is a light itself that must be sampled, so keep them as simple as possible. If you have a densely tessellated but dim meshlight, you can also use the "power" or "importance" light strategies to avoid sampling the large number of faces. This removes the performance hit of a dense meshlight, but can produce strange results if the light contributes significantly to the scene illumination. This trick works best for dimly glowing objects, such as indicator LEDs, bioluminescent creatures, and so forth.

Third, homogeneous volumes are much slower than "clear" type volumes or no volumetrics at all, so SSS and atmospheric effects should be used sparingly unless you are ready for a very long render. Also, consider using the "single" volume integrator when using atmospheric scattering.

Fourth, procedural textures and microdisplacement add calculations every time a ray intersects them, so don't get carried away with them.

Finally, if you aren't sure what settings to use, use the metropolis sampler, bidirectional path tracing, the Mitchell filter, and the QBVH accelerator. This should give a clean, artifact-free image with reasonable speed.

If you are doing test renders, lowering resolution, rendering only a portion of the frame, or using a simpler surface integrator (such as direct) can give useful results with much less waiting.
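
The first tip above can be made concrete with a little arithmetic: a ray's throughput shrinks roughly by the material reflectance at every diffuse bounce, so lower reflectance lets rays (and the Russian Roulette termination described below) finish much sooner. A quick sketch, with an arbitrary 1% cut-off:

    def bounces_until_negligible(reflectance, threshold=0.01):
        # How many bounces until the ray carries less than `threshold` of its energy.
        throughput, bounces = 1.0, 0
        while throughput > threshold:
            throughput *= reflectance
            bounces += 1
        return bounces

    for r in (0.95, 0.8, 0.5):
        print(f"reflectance {r}: ~{bounces_until_negligible(r)} bounces matter")
    # reflectance 0.95: ~90 bounces, 0.8: ~21 bounces, 0.5: ~7 bounces (roughly)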

Russian Roulette Explained

By default, LuxRender uses a technique called Russian Roulette (RR). The Russian Roulette technique is a way to reduce the average depth of a ray (number of bounces) in an unbiased way. Usually the main contributions happen at the first few bounces of a ray, so going all the way to 20 bounces will usually not contribute significantly, but will take 4x the time of just 5 bounces.

The Russian Roulette technique comes in two modes: probability and efficiency. The former uses a fixed probability to terminate a path for each bounce. The default however is efficiency, which takes into consideration how much light there is to be gained by going one step further. This usually reduces noise a lot better than the probability mode, however it is then material dependent. If the material is entirely white, then there's a 100% probability of it continuing to bounce, since the material will reflect all the light from that bounce and thus that extra bounce will contribute 100%.
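
A minimal sketch of the efficiency mode's core idea, under the simplifying assumption that the continuation probability is just the reflectance of the material that was hit; surviving paths are scaled up so the estimate stays unbiased.

    import random

    def bounce_with_rr(throughput, reflectance):
        # Continue with probability equal to the reflectance of the material hit.
        continue_prob = min(1.0, reflectance)
        if random.random() >= continue_prob:
            return None                              # path terminated by Russian Roulette
        # Scale the surviving throughput up so the average stays correct (unbiased).
        return throughput * reflectance / continue_prob

    # A dark material (reflectance 0.2) terminates most paths after one bounce,
    # while a perfectly white material (reflectance 1.0) never terminates them.
    for material in (0.2, 1.0):
        survivors = sum(bounce_with_rr(1.0, material) is not None for _ in range(10000))
        print(f"reflectance {material}: {survivors} of 10000 paths continued")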

So, if you're using Russian Roulette in efficiency mode, which is the default, then the darker the material, the shorter the average depth, and thus the quicker it can start on a new sample.

In addition, the lower average depth helps to prevent fireflies, since low-probability events that are accepted have to be scaled up to compensate for all the other low-probability events that weren't accepted.
