Filtering 101

Filtering? What is that?

The process of rendering relies heavily on sampling: for example, the scene to be rendered is sampled by the pixels of the final image. This implies that a correct rendering must respect the Nyquist limit: the frequency of the details in the scene must be at most half the sampling frequency of the pixels. Otherwise, aliasing artefacts appear.

This aliasing can have several causes. It can be induced by the geometry, when several triangles project onto the same pixel. It can also be induced by the spatial maps (textures, normal maps, horizon maps, …) applied on the surface. To avoid this, GPUs use mipmapping, but it is not suitable in many cases.

The color of a pixel p is ideally the mean of all the visible contributions of the spatial maps inside this pixel. This is DIFFERENT from the color computed from the mean of the visible contributions of the spatial maps.
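
In other words, averaging before shading is not the same as shading the average, because the shading function is generally non-linear. A minimal worked example, with a made-up quadratic function f standing in for the shading:

$$f(x) = x^2, \qquad X \in \{0, 1\} \text{ equally likely:} \qquad E\bigl[f(X)\bigr] = \frac{0 + 1}{2} = \frac{1}{2} \;\neq\; f\bigl(E[X]\bigr) = \left(\frac{1}{2}\right)^{2} = \frac{1}{4}$$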

The first teapot shows the normal map at its original mipmapping level. The second shows the lighting obtained using only the mean normal produced by mipmapping. The last teapot takes into account that several normals contribute to the final lighting of a point. Image taken from [1].
This becomes obvious with normal maps. When a normal-mapped object is seen up close, the normal map is sampled correctly (first teapot). When it is seen from afar, if the normals are simply mipmapped, the object becomes progressively smooth, as only the mean normal contributes to the lighting (second teapot). Instead, we would like to keep, at each point, the information that a wide range of different normals actually contributes to the lighting, not only the mean one (third teapot).

This is the objective when filtering data. The point is to remove aliasing artefacts by finding new representations for the objects. With a simple request (like a texel fetch or a single function evaluation), we want to be able to retrieve all the information about the surface that projects into the pixel (in the case of normal maps, we want both a mean normal and the degree of roughness of the original surface).

Mathematical Background

This section is strongly inspired by Bruneton & Neyret [2]. I will try to detail the mathematical intuitions behind all this formalism.

Rendering is an integration problem: integrating, for each point, over all the incoming light directions, or integrating, in a pixel, over the whole patch of surface that projects onto it. Here, we will focus on this second integration.

[Figure: the filtering problem]
Here, we render an object S. This object is represented in memory by a coarse geometry (A) and several spatial maps. We want to compute the value I of the pixel onto which A projects. This is the integral over S of the local illumination equation (in our case, this equation returns the color of a pixel depending on the light direction, the camera direction and the spatial maps). Picture from [2].
To solve an integral on a computer, several numerical integration methods exist. They usually sum samples of the function over the integration domain. This is quite slow and only returns an approximation of the integral value. Therefore, we would rather solve our integral analytically, i.e. express it with a function g that can be evaluated directly on the computer.
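
To make the contrast concrete, here is a generic integral written both ways (the uniform N-sample sum is just one possible quadrature rule, shown for illustration):

$$\int_S f(x)\,dx \;\approx\; \frac{|S|}{N}\sum_{i=1}^{N} f(x_i) \quad \text{(numerical: } N \text{ evaluations of } f\text{)}, \qquad \int_S f(x)\,dx = g(S) \quad \text{(analytical: one evaluation of } g\text{)}$$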

In our case, we rely on spatial maps. Thus, our function will be parametrized by those maps at different levels of resolution, depending on S. Once we have found our function g, we will use textures to store its parameters and mipmapping to get them at the right resolution. This implies that all those parameters must combine linearly (so that the averaging done by the mipmapping process gives a correct result).

For example, take a function f parametrized by a map M. The color I of the pixel is computed with

$$I = \frac{1}{|S|} \int_S f\bigl(M(x)\bigr)\, dx$$
We want to re-express the values in M to get a new map M' such that

$$M'(S) = \frac{1}{|S|} \int_S M'(x)\, dx$$

$$I = \frac{1}{|S|} \int_S f\bigl(M(x)\bigr)\, dx = g\bigl(M'(S)\bigr)$$

This leaves the question of the function g. Most of the time, g is based on the probability distribution of the values of M over S. Often, this distribution is a Gaussian, parametrized only by its mean and its variance. The mean combines linearly and can be stored in a texture. The variance does not combine linearly. Instead, we use the moments of the distribution and exploit the fact that the variance can be computed from the mean value and the mean square value. The mean square value is stored in a second texture, containing the squared values of the map.
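
Concretely, both moments are plain averages over S (hence obtainable by mipmapping), and the variance is recovered from them:

$$\mu(S) = \overline{M}(S), \qquad \sigma^2(S) = \overline{M^2}(S) - \overline{M}(S)^2$$

where $\overline{M}$ and $\overline{M^2}$ denote the mipmapped averages over S of the map and of its squared values.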

Practice

It might be easier to understand all this with a concrete practical case.

We have a texture with Gaussian-distributed values (we can check this by looking at the histogram of the image and verifying that it looks like a Gaussian).

[Figure: the Gaussian noise tile]

Applying this texture as a height map on an infinite plane, we want to color every point above a value t with a color Ch, and all the other points with a color Cl.
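
Before any filtering, the fragment shader is just a threshold test on the sampled height. A minimal sketch (the sampler, varying and output names are placeholders chosen to match the filtered version shown later):

// Naive, unfiltered version: threshold the height fetched at the fragment.
// Aliases badly as soon as many texels of the noise tile fall into one pixel.
uniform sampler2D heightmap;   // the Gaussian noise tile
in vec2 tex_coord;
out vec4 color;

void main()
{
    const float t  = 0.47;                 // height threshold
    const vec3  Ch = vec3(0.7, 0.7, 0.7);  // color above the threshold
    const vec3  Cl = vec3(0.0, 0.5, 0.0);  // color below the threshold

    float h = texture(heightmap, tex_coord).r;
    color = vec4(h > t ? Ch : Cl, 1.0);
}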

The rendering equation therefore becomes

$$I = \frac{1}{|S|} \int_S \Bigl[ H\bigl(h(x) - t\bigr)\, C_h + \bigl(1 - H\bigl(h(x) - t\bigr)\bigr)\, C_l \Bigr]\, dx$$

Where H is the Heaviside function (H(x) = 0 if x < 0, 1 otherwise), h(x) is the height at point x and S is the surface patch. We keep going with the math, separating the integrals from Ch and Cl, which are constants:

$$I = C_h\, \frac{1}{|S|} \int_S H\bigl(h(x) - t\bigr)\, dx \;+\; C_l\, \frac{1}{|S|} \int_S \Bigl(1 - H\bigl(h(x) - t\bigr)\Bigr)\, dx$$

Now, we change the integration domain. Instead of integrating over S, we integrate over D, the domain of the height values. Instead of summing all the heights inside the pixel, we sum over all the possible heights, each weighted by the probability that it appears in the pixel.

$$I = C_h \int_D H(h - t)\, \mathrm{NDF}(S, h)\, dh \;+\; C_l \int_D \bigl(1 - H(h - t)\bigr)\, \mathrm{NDF}(S, h)\, dh$$

NDF(S, h) is the probability density that the height h is present on the patch S.
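
One way to write this density formally (an assumption about the exact definition used here, but it matches how such footprint distributions are usually built): it measures the fraction of the patch S where the height takes the value h, and it integrates to 1 over D:

$$\mathrm{NDF}(S, h) = \frac{1}{|S|} \int_S \delta\bigl(h(x) - h\bigr)\, dx, \qquad \int_D \mathrm{NDF}(S, h)\, dh = 1$$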

Then, using the fact that H(x) = 0 when x < 0, we reduce the integration domain. Since H(x) = 1 when x ≥ 0, we can also remove it from the formula (we suppose that D = [0, 1]).

$$I = C_h \int_t^1 \mathrm{NDF}(S, h)\, dh \;+\; C_l \int_0^t \mathrm{NDF}(S, h)\, dh$$

Which is equal to

$$I = C_h \bigl(1 - \mathrm{CDF}_h(S, t)\bigr) + C_l\, \mathrm{CDF}_h(S, t)$$

Where CDF_h(S, t) is the cumulative distribution function of h over the patch S, evaluated at the value t (the CDF is, by definition, the integral of the density up to t, and for common distributions it has a closed-form expression).

All this math means that, to compute the filtered color of our pixel, we need to be able to evaluate the CDF of the height distribution for every S. Here, our heights are Gaussian-distributed, so the CDF is that of a Gaussian:

$$\mathrm{CDF}_h(S, t) = \frac{1}{2} \left[ 1 + \operatorname{erf}\!\left( \frac{t - \mu}{\sigma \sqrt{2}} \right) \right]$$

So, to filter our pixel, we only need to be able to get mu and sigma in each pixel. And this can be done with textures and mipmapping.

Loading the noise tile as a texture and mipmapping it already gives us mu. The only trick is to get the sigma value. To get it, we use the following property:

$$\mathrm{Var}(X) = E[X^2] - E[X]^2$$

where Var(X) = sigma² and E[X] is the expected value of X (here, its mean, and therefore the value computed by the mipmapping process). E[X] is the mu value given by the texture containing the tile, and E[X²] can be obtained by sampling a second texture containing the squared noise values (mipmapping that second texture averages the squares, which is exactly what we need).

In code terms, it means:


float t = 0.47;                                            // height threshold

// E[X] and E[X^2] over the pixel footprint, obtained via mipmapping
float height  = texture(heightmap, tex_coord).r;           // E[X]
float height2 = texture(heightmap_squared, tex_coord).r;   // E[X^2]

float mu     = height;
float sigma2 = max(1e-4, height2 - height * height);       // Var(X) = E[X^2] - E[X]^2

// Gaussian CDF evaluated at the top of the height domain (h = 1) and at the threshold t.
// erf is not a GLSL built-in: an approximation must be provided (see the sketch below).
float max_cdf = 0.5 * (1.0 + erf((1.0 - mu) / sqrt(2.0 * sigma2)));
float cdf     = 0.5 * (1.0 + erf((t - mu) / sqrt(2.0 * sigma2)));

// P(t < h < 1) * Ch  +  P(h < t) * Cl
color = vec4((max_cdf - cdf) * vec3(0.7, 0.7, 0.7) + cdf * vec3(0.0, 0.5, 0.0), 1.0);
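
A note on erf: GLSL does not provide it natively, so a small helper has to be supplied. Here is a possible sketch using the classical Abramowitz & Stegun rational approximation; the approximation used in the original shader may well differ:

// Error function approximation (Abramowitz & Stegun, formula 7.1.26),
// accurate to roughly 1.5e-7 over the whole real line.
float erf(float x)
{
    float s = sign(x);
    x = abs(x);
    float p = 0.3275911;
    float t = 1.0 / (1.0 + p * x);
    float y = 1.0 - (((((1.061405429 * t - 1.453152027) * t) + 1.421413741) * t
                      - 0.284496736) * t + 0.254829592) * t * exp(-x * x);
    return s * y;
}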

And in picture terms

[Figure: without filtering]

[Figure: with filtering]

BIBLIO

[1] M. Olano and D. Baker, “LEAN mapping,” in Proceedings of the 2010 ACM SIGGRAPH symposium on Interactive 3D Graphics and Games, 2010, pp. 181–188.

[2] E. Bruneton and F. Neyret, “A survey of nonlinear prefiltering methods for efficient and accurate surface shading,” IEEE Transactions on Visualization and Computer Graphics, vol. 18, no. 2, pp. 242–260, 2012.
