In my app, for my deferred shading, I am producing 4 layers of information to show occluded parts of the scene during the screen space reflection (SSR) pass. I need normal maps with bumpiness for these layers for lighting, and for my SSR I also need a normal map of the whole scene built from the flat mesh surfaces. So I am trying to optimize resources, either by using a single texture for the two normal maps, or by rebuilding the normals of flat surfaces from the depth buffer.
Storing two normals in a single RGBA texture is possible by encoding only N.xy and decoding N.xyz in the pixel shader. Several methods are proposed here; I use the Lambert (azimuthal equal-area) one, which is claimed to work well, with the two following functions:
float2 N3toN2(float3 N) // encode: spheremap (Lambert azimuthal equal-area) projection
{
    float p = sqrt(N.z * 8 + 8);
    return N.xy / p + 0.5f; // maps the result into [0,1] for storage
}
float3 N2toN3(float2 N) // decode
{
    float2 FE = N * 4 - 2;
    float F = FE.x * FE.x + FE.y * FE.y; // = dot(FE, FE)
    float t = sqrt(1 - F * 0.25f);
    return float3(FE * t, 1 - F * 0.5);
}
I have checked their behaviour by writing these functions on the CPU and they seem to work well, including with negative signs. But in my shader the result is darker than the original and in some cases completely black (see picture A). Depending on the point of view it can be blue only. So there is apparently a problem with the N.z component.
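For completeness, here is a minimal sketch of how the two encoded normals can share a single RGBA texture under the layout described above (the helper names are hypothetical). Since the encode outputs values in [0,1], the target must be a UNORM format, and both inputs must be unit-length before encoding; with an 8-bit target the quantization of the encoded xy can already visibly degrade the reconstructed normals, so a 16-bit UNORM format is safer:

float4 PackTwoNormals(float3 bumpN, float3 flatN)
{
    // .rg = per-pixel (bump) normal, .ba = flat surface normal
    return float4(N3toN2(normalize(bumpN)), N3toN2(normalize(flatN)));
}

void UnpackTwoNormals(float4 enc, out float3 bumpN, out float3 flatN)
{
    bumpN = N2toN3(enc.rg);
    flatN = N2toN3(enc.ba);
}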
Regarding normal reconstruction from the depth buffer, the method is explained here and uses the "magic" of the ddx/ddy functions (how this works is explained here). In the original post they rebuild the position in world space, so I assume the normals they use are in world space. I work in view space, so I reconstruct the position PosV in view space and use it to get the flat surface normal SSNormal, also in view space I assume. From PosV and SSNormal you get the reflected ray direction VSDir in view space like this:
// Compact form of the inverse projection matrix applied to the pixel,
// with D the depth read from the Z-buffer.
// dI = input.texcoord * 0.5 because only the upper quarter of the texture
// holding the 4 layers is used as the final scene to reflect,
// so dI*4 is used to get PosV instead of the usual dI*2.
float3 PosV = float3(InvProj.x * (dI.x * 4 - 1), InvProj.y * (1 - dI.y * 4), 1) / (InvProj.z * D + 1);
float3 SSNormal = normalize(cross(ddx(PosV), ddy(PosV))); // flat surface normal in view space
float3 VSDir = normalize(reflect(normalize(PosV), SSNormal)); // reflected view ray
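For comparison, here is the same reconstruction written out explicitly, assuming a standard D3D-style perspective projection with near/far planes zn/zf and a hardware depth D in [0,1] (the function and parameter names are mine; the constants must be adapted to the actual projection matrix):

float3 ReconstructPosV(float2 ndc, float D, float tanHalfFovY, float aspect, float zn, float zf)
{
    // Linearize the hardware depth: zView = zn*zf / (zf - D*(zf - zn))
    float zView = zn * zf / (zf - D * (zf - zn));
    // Unproject NDC x/y (in [-1,1], y up) onto the view ray at that depth
    return float3(ndc.x * tanHalfFovY * aspect * zView,
                  ndc.y * tanHalfFovY * zView,
                  zView);
}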
Here I have a weird behaviour that seems to depend on the resolution of the texture compared to the final screen size.
I have two methods to generate the layers I need.
If I use a single RTV texture that is twice the size of the screen for the 4 layers (e.g. 1920×1080 for the RTV and 960×540 for the screen), I get a nice result (Panel B). If I set the screen to 1920×1080, the SSR disappears. Of note, the final scene is recomposed during the SSR pass by selecting the appropriate color from the 4 layers based on the Z-buffer value.
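A hypothetical sketch of that selection step (gSceneDepth, gLayerDepth, gLayerColor and gSamp are placeholder names; the idea is to keep the color of the layer whose stored depth matches the Z-buffer):

Texture2D gSceneDepth;    // scene Z-buffer
Texture2D gLayerDepth[4]; // per-layer depth
Texture2D gLayerColor[4]; // per-layer color
SamplerState gSamp;

float4 RecomposeFromLayers(float2 uv)
{
    float sceneD = gSceneDepth.Sample(gSamp, uv).r;
    float4 best = gLayerColor[0].Sample(gSamp, uv);
    float bestErr = abs(gLayerDepth[0].Sample(gSamp, uv).r - sceneD);
    [unroll]
    for (int i = 1; i < 4; i++)
    {
        float err = abs(gLayerDepth[i].Sample(gSamp, uv).r - sceneD);
        if (err < bestErr)
        {
            bestErr = err;
            best = gLayerColor[i].Sample(gSamp, uv);
        }
    }
    return best;
}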
The problem seems to come from the PosV calculation when the screen size is equal to the texture size. After checking, VSDir.z is mostly negative, and I have a test that skips SSR for rays with negative VSDir.z. After several attempts to change this equation I am still not successful.
In a second method (Panel A) I have one RTV per layer, each RTV having the size of the screen. Here unfilled lines appear in the SSR.
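If those lines are the classic quad-derivative artifact at depth discontinuities, replacing ddx/ddy with explicit neighboring depth taps might avoid them. An untested sketch, reusing the hypothetical ReconstructPosV helper from above (all the g-prefixed names are placeholders, normally fed through a cbuffer):

float2 gTexel; // 1.0 / texture size
float gTanHalfFovY, gAspect, gZn, gZf;
Texture2D gDepth; // depth texture, point sampled
SamplerState gPointSamp;

float3 ReconstructPosVAt(float2 uv)
{
    float D = gDepth.SampleLevel(gPointSamp, uv, 0).r;
    float2 ndc = float2(uv.x * 2 - 1, 1 - uv.y * 2); // D3D UV -> NDC (y flipped)
    return ReconstructPosV(ndc, D, gTanHalfFovY, gAspect, gZn, gZf);
}

float3 NormalFromDepthTaps(float2 uv)
{
    // Same cross product ddx/ddy would build inside a pixel quad,
    // but from explicitly sampled neighbors, so no quad-boundary issues.
    float3 P  = ReconstructPosVAt(uv);
    float3 Px = ReconstructPosVAt(uv + float2(gTexel.x, 0));
    float3 Py = ReconstructPosVAt(uv + float2(0, gTexel.y));
    return normalize(cross(Px - P, Py - P));
}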
Of course everything works well if I use an extra texture to store the full flat or bump normal (XYZ) of the scene.