## Screen Space Grid

Rendering terrain or an ocean is usually done by displacing the vertices of a grid defined in world space. Depending on the resolution of the grid, this can create aliasing artefacts when several triangles project onto the same pixel, and it can be computationally expensive. This is why Level of Detail (LoD) algorithms are used: these methods adapt the resolution of the grid to its distance from the camera.

Another solution is to define the grid in screen space and then back-project it onto an infinite plane. With such a technique, the grid resolution on screen is constant, and the change in resolution in world space is continuous (no hard transitions between different resolutions).

Grid defined in screen space and back-projected onto a plane.

This algorithm works by

• Creating a grid orthogonal to the viewer in screen space.
• Projecting it onto a plane in world space.
• Displacing the points according to a height field.
• Rendering the projected plane.
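The steps above can be sketched on the CPU. This is a minimal Python sketch under assumptions of mine: `unproject` stands in for the inverse projection/view transform that turns a screen point into a world-space ray direction, and the plane is y = 0.

```python
import numpy as np

def screen_grid(n):
    """Step 1: an n x n grid of screen-space points in [-1, 1] x [-1, 1]."""
    u = np.linspace(-1.0, 1.0, n)
    return np.array([(x, y, 0.0) for y in u for x in u])

def project_on_plane(points, cam_pos, unproject):
    """Step 2: back-project each screen point onto the plane y = 0.
    `unproject` maps a screen point to a world-space ray direction
    whose y component is negative (the ray goes toward the plane)."""
    out = []
    for p in points:
        d = unproject(p)
        t = cam_pos[1] / -d[1]           # ray / plane intersection distance
        out.append(cam_pos + t * d)
    return np.array(out)

def displace(points, height):
    """Step 3: displace each projected point with a height field."""
    return np.array([(p[0], height(p[0], p[2]), p[2]) for p in points])

# Step 4, rendering with the usual MVP transform, happens on the GPU.
```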

In its naive form, this algorithm fails for each point of the far plane of the frustum that does not intersect the plane. When the camera looks above the horizon, some of the frustum points will never intersect the plane: their intersection points would be computed behind the camera. Furthermore, the camera must stay sufficiently high above the plane so that the displaced points do not end up above it. These are heavy constraints on camera movement.

First, we create a grid in screen space: we generate a grid of vertices with 3D coordinates ranging from [-1, -1, 0] to [1, 1, 0]. If you visualize them with the following shaders,

```
// Vertex shader
#version 330

layout (location = 0) in vec3 position;

void main()
{
    gl_Position = vec4(position.x, position.y, 0, 1);
}
```

```
// Fragment shader
#version 330

out vec4 color;

void main()
{
    color = vec4(0, 0, 0, 1);
}
```

you get something like this (in wireframe).

Then, each of those points has to be projected onto the world plane. Here, we want to find the point P, the projection of p onto the world plane. To do so, we use a bit of right-triangle trigonometry. The triangle formed by p, P and h is a right triangle: the distance between h and p is the height of the camera, and the vector from p to P is the world-space direction going from the camera through the vertex p. Then, to get P, all we need is the distance t, and to solve

P = p + world_dir * t

Trigonometry in this right triangle tells us that

t = camera_height / cos(α)

with cos(α) = dot(up_vector, world_dir).

This leads to the following code (my up vector is vec3(0, -1, 0)).

```
vec3 toWorldPos(vec3 posScreen)
{
    vec4 vertex = vec4(posScreen, 1);

    // Unproject the screen-space vertex into a world-space direction
    vec3 camera_dir = normalize((inverse(projection) * vertex).xyz);
    vec3 world_dir  = normalize(inverse(view) * vec4(camera_dir, 0)).xyz;

    // t = camera_height / cos(alpha), with cos(alpha) = -world_dir.y
    float t = camera_position.y / -world_dir.y;

    return camera_position + t * world_dir;
}
```
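As a sanity check, here is the same math on the CPU in Python (a sketch of mine, not part of the original shaders), using the post's up vector (0, -1, 0) and an assumed camera height of 2:

```python
import numpy as np

up = np.array([0.0, -1.0, 0.0])
camera_position = np.array([0.0, 2.0, 0.0])   # 2 units above the plane

# A ray looking 45 degrees downward
world_dir = np.array([0.0, -1.0, 1.0]) / np.sqrt(2.0)

cos_alpha = np.dot(up, world_dir)             # = -world_dir.y
t = camera_position[1] / cos_alpha            # same as .y / -world_dir.y

P = camera_position + t * world_dir
# P lies on the plane y = 0, two units in front of the camera: (0, 0, 2)
```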

And now, you can displace the computed world position with a heightmap and render those vertices with the usual Model View Projection transforms. Here, we use a grayscale texture as a heightmap (only its red channel is read).

```
// Vertex shader
#version 330

uniform mat4 projection;
uniform mat4 view;
uniform vec3 camera_position;

uniform sampler2D tex;

layout (location = 0) in vec3 position;

out vec3 world_pos;

// toWorldPos is the function defined in the previous listing

void main()
{
    world_pos = toWorldPos(position);

    float height = texture(tex, world_pos.xz).r;
    world_pos.y = height;

    gl_Position = projection * view * vec4(world_pos, 1);
}
```

```
// Fragment shader
#version 330

in vec3 world_pos;

out vec4 color;

void main()
{
    // Face normal from the screen-space derivatives of the world position
    vec3 normal = normalize(cross(dFdx(world_pos), dFdy(world_pos)));
    vec3 light = vec3(0, 1, 1);

    color = vec4(1, 0, 0, 1) *
            dot(normalize(-light), normalize(normal));
}
```

Hmmm, this wireframe is a bit fishy, isn't it?

Actually, this method looks nice and easy, but it has a major flaw: when the world direction does not intersect the plane, the result is completely undefined. Therefore, if the camera looks along the z axis, half of the screen points never intersect the plane.
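A small numeric illustration of the flaw (assumed values of mine): with the camera above the plane, a ray pointed above the horizon yields a negative t, i.e. an "intersection" behind the camera.

```python
import numpy as np

camera_position = np.array([0.0, 2.0, 0.0])   # camera above the plane

down_ray = np.array([0.0, -0.5, 1.0])   # below the horizon
up_ray   = np.array([0.0,  0.5, 1.0])   # above the horizon

t_down = camera_position[1] / -down_ray[1]   # positive: valid hit point
t_up   = camera_position[1] / -up_ray[1]     # negative: behind the camera
```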

A solution to this problem is to project the points in a geometry shader. This way, we can discard the triangles that are not in front of the camera.

```
// Vertex shader
#version 330

layout (location = 0) in vec3 position;

out vec3 vertex_position;

void main()
{
    vertex_position = position;
    gl_Position = vec4(position, 1);
}
```

```
// Geometry shader
#version 330

layout (triangles) in;
layout (triangle_strip, max_vertices = 3) out;

uniform mat4 projection;
uniform mat4 view;
uniform vec3 camera_position;

uniform sampler2D tex;

in vec3 vertex_position[];

out vec3 geometry_position;

bool toWorldPos(vec3 posScreen, out vec3 pos)
{
    vec4 vertex = vec4(posScreen, 1);

    vec3 camera_dir = normalize((inverse(projection) * vertex).xyz);
    vec3 world_dir  = (inverse(view) * vec4(camera_dir, 0)).xyz;

    // The ray is parallel to the plane: no intersection
    if (world_dir.y == 0)
        return false;

    float t = camera_position.y / -world_dir.y;

    // The intersection is behind the camera
    if (t < 0)
        return false;

    pos = camera_position + t * world_dir;
    pos.y = 0;

    return true;
}

void main()
{
    for (int i = 0; i < 3; i++)
    {
        if (toWorldPos(vertex_position[i], geometry_position))
        {
            vec4 disp = texture(tex, geometry_position.xz);
            geometry_position.y = disp.x;

            gl_Position = projection * view * vec4(geometry_position, 1);

            EmitVertex();
        }
    }

    EndPrimitive();
}
```

Leading to this result. Up to here, everything seems fine… then why is the plane so ugly?

Actually, it is once again a matter of filtering. For OpenGL, the grid is right in front of the camera, so it always accesses the texture at its highest level of detail. The grid resolution quickly becomes insufficient to sample the texture, and aliasing occurs.

This is the same object, rendered using a 50×50 and a 150×140 screen grid.

If you want to access the correct LoD, you need to use the function textureGrad. This function changes the way OpenGL selects its LoD. For detailed information about the OpenGL LoD selection process, see the post OpenGL Texture Access.
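For intuition, OpenGL's LoD selection can be sketched in Python. This is an assumed simplification of the specification's formula (isotropic case, no bias or clamping): the scale factor ρ is the largest derivative of the texel coordinates, and λ = log2(ρ).

```python
import math

def mip_level(dudx, dudy, tex_size):
    """Approximate mip level from texture-coordinate derivatives:
    rho = largest texel-space derivative magnitude, lambda = log2(rho).
    A simplified version of the GL specification's formula."""
    rho = max(math.hypot(dudx[0] * tex_size, dudx[1] * tex_size),
              math.hypot(dudy[0] * tex_size, dudy[1] * tex_size))
    return max(0.0, math.log2(rho)) if rho > 0 else 0.0

# One texel per pixel -> level 0; two texels per pixel -> level 1
```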

Here, it requires computing the derivatives between the projection of one point of the projective grid and the projections of its neighbours along X and Y.

```
vec3 position_x;
vec3 position_y;

// Project the screen-space neighbours of the current vertex
// (1.0/150.0 is one grid cell in screen space)
toWorldPos(vertex_position[i] + vec3(1.0/150.0, 0, 0), position_x);
toWorldPos(vertex_position[i] + vec3(0, 1.0/150.0, 0), position_y);

vec2 dudx = position_x.xz - geometry_position.xz;
vec2 dudy = position_y.xz - geometry_position.xz;

// We multiply by 0.1 to have fewer texture tiles
vec4 disp = textureGrad(tex,
                        geometry_position.xz * 0.1,
                        dudx * 0.1, dudy * 0.1);
```
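To see why these derivatives matter, here is a small CPU-side sketch (assumed setup of mine) showing that the world-space footprint of one screen-space step explodes near the horizon, so no single mip level fits the whole grid:

```python
import numpy as np

def hit(cam, d):
    """Intersection of a downward ray with the plane y = 0."""
    return cam + (cam[1] / -d[1]) * d

cam = np.array([0.0, 2.0, 0.0])

def footprint(pitch, step=0.01):
    """World-space distance covered by a small angular step
    at a given downward pitch (radians)."""
    d0 = np.array([0.0, -np.sin(pitch), np.cos(pitch)])
    d1 = np.array([0.0, -np.sin(pitch + step), np.cos(pitch + step)])
    return np.linalg.norm(hit(cam, d0) - hit(cam, d1))

# A grazing ray (pitch 0.1) covers far more ground than a steep
# one (pitch 1.0), so the derivatives vary wildly across the grid.
```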

Which gives us the final rendering.

References:

• J. D. Mahsman, Projective grid mapping for planetary terrain. University of Nevada, Reno, 2010.
• "Real-time water rendering – Introducing the projected grid concept – Habib's Water Shader." http://habib.wikidot.com/projected-grid-ocean-shader-full-html-version