
Issues with depth calculation in HLSL shader


I’m currently trying to implement shadow maps in my graphics framework.
I ran into an issue with the depth calculations that I haven't been able to solve myself (yet). After a lot of testing and debugging, I think I have finally narrowed the problem down.

To get the position in light-projection space I do this in the vertex shader:

float4 worldPosition = mul(input.position, worldMatrix);
output.lightPosition = mul(worldPosition, lightViewProjectionMatrix);

Note that I premultiply the light's view and projection matrices on the CPU.
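
To be explicit, this should be equivalent to applying the two matrices separately in the vertex shader. A minimal sketch of the unfused version, assuming separate lightViewMatrix and lightProjectionMatrix constants (names of my choosing):

// Unfused equivalent of the premultiplied lightViewProjectionMatrix.
// lightViewMatrix and lightProjectionMatrix are assumed constant-buffer members.
float4 worldPosition = mul(input.position, worldMatrix);
float4 lightViewPos  = mul(worldPosition, lightViewMatrix);
output.lightPosition = mul(lightViewPos, lightProjectionMatrix);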

To calculate the corresponding uv coordinates I’m doing the following in the pixel shader:

float2 shadowCoords;
shadowCoords.x =  input.lightPosition.x / input.lightPosition.w / 2.0f + 0.5f;
shadowCoords.y = -input.lightPosition.y / input.lightPosition.w / 2.0f + 0.5f;

This appears to be correct as well: when I used the sampled values as surface colors, the output looked exactly as expected.
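
A minimal sketch of that check, assuming the shadow map and sampler are bound under names of my choosing:

// Assumed resource bindings for the shadow map.
Texture2D shadowMapTexture : register(t0);
SamplerState sampleState   : register(s0);

// Inside the pixel shader: output the sampled value as the surface color.
float sampledDepth = shadowMapTexture.Sample(sampleState, shadowCoords).r;
return float4(sampledDepth, sampledDepth, sampledDepth, 1.0f);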

Now I want to calculate the depth of the current pixel (relative to the light, of course). The corresponding code:

float depth = input.lightPosition.z / input.lightPosition.w;

(input.lightPosition is the float4 I output from the vertex shader as shown above.)
However, this apparently gives wrong results. If I use this depth as the surface color (with all lighting disabled), I get the result shown below:

[image: the depth rendered as surface color]

This is obviously wrong. Note that the point light is directly above the black line.

Now, if I instead use input.position (which is calculated by multiplying the world position by the camera's view-projection matrix) to calculate the depth, I get correct results.

I figured this has to be caused by the semantics: input.position uses SV_POSITION, while input.lightPosition uses POSITION0 (I also tried TEXCOORD0, without success). I verified this by modifying the DepthmapShader and got different results when using SV_POSITION instead of POSITION0. If you need more information, I'll happily post it.


Now for the final question: which semantic do I have to use to properly calculate the depth? As SV_POSITION is already in use for the actual screen position, I can't use that. Or do I have some general misconception? I just can't wrap my head around what's happening and would be glad if someone could help me out.
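
For reference, the TEXCOORD0 variant I tried was simply the pixel input struct with the semantic swapped; roughly:

// Variant with TEXCOORD0 instead of POSITION0 (gave the same wrong results).
struct PixelInputType {
    float4 position      : SV_POSITION;
    float4 lightPosition : TEXCOORD0;
};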


For the sake of completeness, here’s my Depthmap-Shader:

struct VertexInputType {
    float4 position : POSITION;
};

struct PixelInputType {
    float4 position : SV_POSITION;
    float4 depthPosition : POSITION0;
};

// Constant buffer holding the transform matrices (declaration added here for
// completeness; the buffer name is of my choosing).
cbuffer MatrixBuffer {
    matrix worldMatrix;
    matrix viewProjectionMatrix;
};

PixelInputType DepthmapVertexShader(VertexInputType input)
{
    PixelInputType output;

    // Force w = 1 so the input is treated as a point.
    input.position.w = 1.0f;

    float4 worldPosition = mul(input.position, worldMatrix);
    output.position = mul(worldPosition, viewProjectionMatrix);

    // Pass the clip-space position along so the pixel shader can read it.
    output.depthPosition = output.position;

    return output;
}

float4 DepthmapPixelShader(PixelInputType input) : SV_TARGET
{
    // Perspective divide to get normalized device depth.
    float depth = input.depthPosition.z / input.depthPosition.w;
    float4 color = { depth, depth, depth, depth };
    return color;
}
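
Once the depth map is correct, my plan for the lighting pass is a comparison along these lines (a sketch; the bias value is an assumption on my part, and shadowMapTexture/sampleState are the same assumed bindings as above):

// Intended shadow test in the lighting pass.
float2 shadowCoords;
shadowCoords.x =  input.lightPosition.x / input.lightPosition.w / 2.0f + 0.5f;
shadowCoords.y = -input.lightPosition.y / input.lightPosition.w / 2.0f + 0.5f;

float pixelDepth  = input.lightPosition.z / input.lightPosition.w;
float storedDepth = shadowMapTexture.Sample(sampleState, shadowCoords).r;
float bias        = 0.001f; // assumed value; needs tuning against shadow acne

// The pixel is lit if it is no farther from the light than the stored occluder.
float lit = (pixelDepth - bias <= storedDepth) ? 1.0f : 0.0f;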

Edit (in response to Tim’s answer): I’m constructing the view-projection matrix like this (the snippet is located inside the PointLight class):

// Named locals instead of taking the address of temporaries
// (XMLoadFloat3(&XMFLOAT3(...)) only compiles as an MSVC extension).
XMFLOAT3 dirF(0.0f, 0.0f, 1.0f);
XMFLOAT3 upF(0.0f, 1.0f, 0.0f);

XMVECTOR dir = XMLoadFloat3(&dirF);
XMVECTOR up  = XMLoadFloat3(&upF);
XMVECTOR pos = XMLoadFloat3(&Location);

XMMATRIX proj = XMMatrixPerspectiveFovLH(XM_PIDIV2, 1.0f, 0.1f, 1000.0f);
XMMATRIX view = XMMatrixLookToLH(pos, dir, up);

return view * proj;

Is this valid? Or do I have to do something else? As Tim mentioned, it only projects onto a single plane. How should I set up the matrix (and how would I write the depth information to a texture in case that procedure changes)? (Bonus question: is there a way to do a 360° shadow map with a single matrix?)
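
For the bonus question, the usual alternative I've seen is rendering six 90° faces into a cube map rather than using a single matrix. Sampling would then look roughly like this (a sketch; shadowCubeTexture, lightPosition, farPlane, and input.worldPosition are all assumed names, with the depth pass storing distance-to-light divided by the far plane):

// Cube-map shadow lookup for a point light (sketch).
TextureCube shadowCubeTexture : register(t0); // assumed binding

float3 toPixel    = input.worldPosition.xyz - lightPosition;
float  pixelDist  = length(toPixel) / farPlane;
float  storedDist = shadowCubeTexture.Sample(sampleState, toPixel).r;
float  lit        = (pixelDist - 0.001f <= storedDist) ? 1.0f : 0.0f;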

