c++ - Calculating position in view space from depth buffer texture in DirectX 11/HLSL


I want to reconstruct the position in view space from a depth buffer texture. I've managed to set the depth buffer as a shader resource view in the shader, and I believe there's no problem with that part.

I used the following formula to calculate the position in view space for each pixel on screen:

Texture2D textures[4]; //color, normal (in view space), depth buffer (as texture), random
SamplerState objSamplerState;

cbuffer cbPerObject : register(b0)
{
    float4 notImportant;
    float4 notImportant2;
    float2 notImportant3;
    float4x4 projectionInverted;
};

float3 getPosition(float2 textureCoordinates)
{
    //textures[2] stores the depth buffer
    float depth = textures[2].Sample(objSamplerState, textureCoordinates).r;
    float3 screenPos = float3(textureCoordinates.xy * float2(2, -2) - float2(1, -1), 1 - depth);
    float4 wPos = mul(float4(screenPos, 1.0f), projectionInverted);
    wPos.xyz /= wPos.w;
    return wPos.xyz;
}

But it gives me a wrong result:

[screenshot: the incorrect reconstructed view-space positions]

I calculate the inverted projection matrix this way on the CPU and pass it to the pixel shader:

ConstantBuffer2DStructure cbPerObj;
DirectX::XMFLOAT4X4 projection = camera->GetProjection();
DirectX::XMMATRIX camProjection = XMLoadFloat4x4(&projection);
camProjection = XMMatrixTranspose(camProjection);
DirectX::XMVECTOR det;
DirectX::XMMATRIX projectionInverted = XMMatrixInverse(&det, camProjection);
cbPerObj.projectionInverted = projectionInverted;
...
context->UpdateSubresource(constantBuffer, 0, NULL, &cbPerObj, 0, 0);
context->PSSetConstantBuffers(0, 1, &constantBuffer);
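To rule out a broken inverse, I can add a quick sanity check right after XMMatrixInverse. This is only a sketch - IsInversePlausible and the 0.0001f tolerance are a name and value I made up for illustration:

#include <DirectXMath.h>

// Returns true when m * inv is close enough to the identity matrix.
// The epsilon is an arbitrary tolerance.
bool IsInversePlausible(DirectX::FXMMATRIX m, DirectX::CXMMATRIX inv)
{
    using namespace DirectX;
    const XMMATRIX product = XMMatrixMultiply(m, inv);
    const XMVECTOR epsilon = XMVectorReplicate(0.0001f);
    return XMVector4NearEqual(product.r[0], XMVectorSet(1.0f, 0.0f, 0.0f, 0.0f), epsilon)
        && XMVector4NearEqual(product.r[1], XMVectorSet(0.0f, 1.0f, 0.0f, 0.0f), epsilon)
        && XMVector4NearEqual(product.r[2], XMVectorSet(0.0f, 0.0f, 1.0f, 0.0f), epsilon)
        && XMVector4NearEqual(product.r[3], XMVectorSet(0.0f, 0.0f, 0.0f, 1.0f), epsilon);
}

// Usage, right after computing the inverse:
// assert(IsInversePlausible(camProjection, projectionInverted));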

I know the calculations in the vertex shader are OK (so I guess myCamera->GetProjection() returns a correct result):

DirectX::XMFLOAT4X4 view = myCamera->GetView();
DirectX::XMMATRIX camView = XMLoadFloat4x4(&view);
DirectX::XMFLOAT4X4 projection = myCamera->GetProjection();
DirectX::XMMATRIX camProjection = XMLoadFloat4x4(&projection);
DirectX::XMMATRIX worldViewProjectionMatrix = objectWorldMatrix * camView * camProjection;

constantsPerObject.worldViewProjection = XMMatrixTranspose(worldViewProjectionMatrix);
constantsPerObject.world = XMMatrixTranspose(objectWorldMatrix);
constantsPerObject.view = XMMatrixTranspose(camView);

But maybe I've calculated the inverted projection matrix in a wrong way? Or did I make some other mistake?
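As an independent reference, I'm also thinking of unprojecting a single pixel on the CPU with DirectXMath and comparing it with what getPosition() returns for the same pixel. This is only a sketch: the viewport size, projection parameters, pixel coordinates and depth value below are made-up example values, not the ones from my real code.

#include <DirectXMath.h>
#include <cstdio>

using namespace DirectX;

int main()
{
    const float width = 1280.0f, height = 720.0f;   // example viewport size

    // Example projection; in the real code this would be myCamera->GetProjection().
    XMMATRIX projection = XMMatrixPerspectiveFovLH(XM_PIDIV4, width / height, 0.1f, 100.0f);

    // Example pixel: centre of the screen, with a depth value read back
    // from the depth buffer (0..1 range in DirectX).
    float pixelX = 640.0f, pixelY = 360.0f, depth = 0.5f;

    XMVECTOR screen = XMVectorSet(pixelX, pixelY, depth, 1.0f);
    XMVECTOR viewSpacePos = XMVector3Unproject(
        screen,
        0.0f, 0.0f, width, height,   // viewport origin and size
        0.0f, 1.0f,                  // viewport MinZ, MaxZ
        projection,
        XMMatrixIdentity(),          // view = identity
        XMMatrixIdentity());         // world = identity

    XMFLOAT3 p;
    XMStoreFloat3(&p, viewSpacePos);
    printf("view-space position: %f %f %f\n", p.x, p.y, p.z);
    return 0;
}

With view and world set to identity, XMVector3Unproject should give the position directly in view space, which is what getPosition() is supposed to produce for the same pixel and depth.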

Edit

As @NicoSchertler spotted, the 1 - depth part in the shader was a mistake. I've changed it to depth - 1 and made some minor changes in the textures' format etc. I have this result now:

[screenshot: the reconstructed positions after changing to depth - 1]

Note the different camera angle (as I don't have the earlier one anymore). Here's a reference - the normals in view space:

[screenshot: the view-space normals, for reference]

It looks somewhat better, right? But it still looks strange and not smooth. Is it a precision problem?
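In case it is a precision issue, this is roughly the depth texture setup I understand should give full 32-bit float precision while still being sampleable in the pixel shader. Again just a sketch - device, width and height are placeholders and error checking is omitted:

#include <d3d11.h>

// A 32-bit floating point depth buffer that can also be bound as a
// shader resource. The typeless format allows both views below.
D3D11_TEXTURE2D_DESC depthDesc = {};
depthDesc.Width = width;
depthDesc.Height = height;
depthDesc.MipLevels = 1;
depthDesc.ArraySize = 1;
depthDesc.Format = DXGI_FORMAT_R32_TYPELESS;
depthDesc.SampleDesc.Count = 1;
depthDesc.Usage = D3D11_USAGE_DEFAULT;
depthDesc.BindFlags = D3D11_BIND_DEPTH_STENCIL | D3D11_BIND_SHADER_RESOURCE;

ID3D11Texture2D* depthTexture = nullptr;
device->CreateTexture2D(&depthDesc, nullptr, &depthTexture);

// Depth-stencil view used while rendering the scene.
D3D11_DEPTH_STENCIL_VIEW_DESC dsvDesc = {};
dsvDesc.Format = DXGI_FORMAT_D32_FLOAT;
dsvDesc.ViewDimension = D3D11_DSV_DIMENSION_TEXTURE2D;

ID3D11DepthStencilView* depthDSV = nullptr;
device->CreateDepthStencilView(depthTexture, &dsvDesc, &depthDSV);

// Shader resource view used when sampling the depth in the pixel shader.
D3D11_SHADER_RESOURCE_VIEW_DESC srvDesc = {};
srvDesc.Format = DXGI_FORMAT_R32_FLOAT;
srvDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
srvDesc.Texture2D.MipLevels = 1;

ID3D11ShaderResourceView* depthSRV = nullptr;
device->CreateShaderResourceView(depthTexture, &srvDesc, &depthSRV);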

Edit 2

As @NicoSchertler suggested, the depth buffer in DirectX uses the [0...1] range. I've changed depth - 1 to just depth, so now I have:

float depth = textures[2].Sample(objSamplerState, textureCoordinates).r;
float3 screenPos = float3(textureCoordinates.xy * float2(2, -2) - float2(1, -1), depth); // -1); //<-- change
float4 wPos = mul(float4(screenPos, 1.0f), projectionInverted);
wPos.xyz /= wPos.w;
return wPos.xyz;

But I got this result:

[screenshot: the reconstructed positions after using depth directly]

