Channel: Question and Answer » hlsl
Viewing all 69 articles
Browse latest View live

XNA 4.0 – Strange edges with multilight shader


I am generating a light map, a depth map and a normal map to calculate the lighting at each pixel with multiple lights.

On both render targets, I set the preferredMultiSampleCount parameter to 16 samples because I want smooth edges.
It seems to work well in the first picture (normal map).
In the second picture (light map), multisampling works great, too.
But somehow there are flickering edges at the border of each model.

I think it may have something to do with the depth map, because due to using SurfaceFormat.Single I have to use

minfilter = point;
magfilter = point;
mipfilter = point;

for the texture filter, and I think AA doesn't work with point filtering.

Is this a known newbie problem or should I upload the source of my shader files for a better understanding of my problem?

NormalTarget2D

LightTarget2D

EDIT:
If I turn multisampling off, I don't get these edges:
WITHOUT
(Take a look at the bottom of the square in the middle.)


HLSL Compile Error: maximum vs_4_0_level_9_3 sampler register index (0) exceeded – note that the target doesn't support texture sampling intrinsics


I'm trying to convert a project from XNA to SharpDX and am updating my shaders to version 4_0. Now I'm getting an error on which I can find no information:

maximum vs_4_0_level_9_3 sampler register index (0) exceeded – note
that the target doesn’t support texture sampling intrinsics

The error is indicated on the first line below:

SamplerState MySampler2   <--here
{
    Filter = MIN_MAG_MIP_POINT;
    AddressU = Wrap;
    AddressV = Wrap;
};

The error only happens when I call the function below from the vertex shader. If I comment out the call, then the shader compiles.

float DoDispMapping(float2 texC0, float2 texC1)
{
    return WaveDispMap0.SampleLevel(MySampler2, texC0, 0).r;
}

Does anybody know what’s going on?

PS. I have tried to do exactly the same thing in a much simpler shader and it worked.

Can't read .cso files but I can read their .hlsl versions?


Well I’ve been trying to read a .cso file to use as a shader for a DirectX program I’m currently making.

Problem is, no matter how I implemented a way to read the file, it never worked. And after fidgeting around I discovered that it's only the .cso files I can't read.

I can read anything else (which means the code works), even their .hlsl files. Which is strange, because the .hlsl (High Level Shader Language) files are supposed to be compiled into .cso (compiled shader object) files.

What I’m currently doing is:

vector<byte> Read(string File){
    vector<byte> Text;
    fstream file(File, ios::in | ios::ate | ios::binary);

    if(file.is_open()){
        Text.resize(file.tellg());
        file.seekg(0 , ios::beg);
        file.read(reinterpret_cast<char*>(&Text[0]), Text.size());
        file.close();
    }        

    return Text;
}

If I then call it:

Read("VertexShader.hlsl"); // Works
Read("VertexShader.cso");  // Doesn't work?!

And I need the .cso version of the shader to draw my sexy triangles. Without it, my life and application will never continue, and I have no idea what could be wrong.

(I’ve also asked this at stack overflow but still no answers.)
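For what it's worth, the Read function above fails silently when the open fails, which hides the most likely cause: the .cso not sitting in the process's working directory at runtime, since the compiler typically writes it to the build output folder. A sketch of the same read with diagnostics (names are mine):

```cpp
#include <fstream>
#include <iostream>
#include <string>
#include <vector>

// Same idea as Read() above, but it reports *why* a file could not be
// opened instead of silently returning an empty vector. A common cause
// for .cso files specifically is that the compiled shaders land in the
// build output directory, which is not the directory the program runs
// from.
std::vector<char> ReadChecked(const std::string& path)
{
    std::ifstream file(path, std::ios::ate | std::ios::binary);
    if (!file.is_open()) {
        std::cerr << "could not open: " << path << '\n';
        return {};
    }
    std::vector<char> data(static_cast<size_t>(file.tellg()));
    file.seekg(0, std::ios::beg);
    file.read(data.data(), static_cast<std::streamsize>(data.size()));
    return data;
}
```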

Can't sample texture in HLSL using DX11


Environment:

  • Windows 7 x64
  • Visual Studio 2012
  • DirectX 11
  • HLSL Shader Model 5
  • Ogre 1.9

Okay, so I’m trying to sample a texture in my pixel shader but I’m coming across some strange problems. Here’s the pixel shader:

struct PS_INPUT {
    float4 pos : SV_POSITION;
    float2 uv  : TEXCOORD0;
};

Texture2D tex0;
SamplerState s0;

float4 pixel_shader(PS_INPUT input) : SV_TARGET {
    float3 texSample = tex0.Sample(s0, input.uv).rgb;

    float4 color = float4(1,1,1,1);

    color.xyz = texSample;

    return color;
}

If you want / need it here’s the ogre .material definition:

material simpleMat
{
    technique
    {
        pass simpleMat
        {
            vertex_program_ref vertex_shader {}
            fragment_program_ref pixel_shader {}
        }

        texture_unit 0
        {
            texture diffuse.png 2d
            tex_address_mode wrap
        }
    }
}

This produces mostly black on my object, but I get 2 random white pixels. But here's the strangest part: when I debug the pixel shader in Visual Studio, texSample nearly always equals zero. So I decided to try something crazy: I debugged the program, got the value of input.uv, and then modified the shader to use those values explicitly. When I do this, texSample actually contains the correct color value. So yeah… any help is much appreciated!

Rojuinex

EDIT

Here is a snapshot that shows the issue in detail:

http://www.pasteall.org/pic/show.php?id=74835

EDIT 8/4/2014

Upon further investigation I have discovered that the further the object is from the camera, the darker the texture becomes. If the camera is right up against the object, the texture has the right color values, but as it moves away the sampler returns darker and darker values until they are black.

XNA 4.0 HLSL – strange depth map


I want to draw my pre-rendered depth map to the scene.

I get my depth value in the following way:

basically (Pixel Shader)

// Depth is stored as distance from camera / far plane distance, 1-d for more precision
output.Depth = float4(1-(input.Depth.x / input.Depth.y),0,0,1);

where (Vertex Shader)

output.Position = mul(input.Position, worldViewProjection);
output.Depth.xy = output.Position.zw;

(input = output of Vertex Shader)

and store it in a RenderTarget2D

depthTarg = new RenderTarget2D(GraphicsDevice, viewWidth,
                viewHeight, false, SurfaceFormat.Single, DepthFormat.Depth24, samples, RenderTargetUsage.DiscardContents);

But somehow, when I draw that (Single) target, I get this result
(displaying RGB):

(image)

channel r:

(all black)

channel g:

(image)

channel b:

(image)

channel a:

(image)

Why is there data in every color channel, although I only defined a value in the red component (and 1 in alpha, but you can somehow see the outlines of the model, so it can't be all 1)?

And surprisingly, the b channel looks almost like what I am looking for, but it has strange lines in it which stay at the same distance from the camera when I move it.

(image)

Deferred Lighting – How to map to the generated texture?


I'm trying to implement deferred lighting. I have done the first and second passes, but I'm stuck on the third, as I don't know how to map from the current pixel being drawn to the generated texture with the light.

Generated texture with the light calculated:
(image)

Current picture:
(image)

I would appreciate it if someone could help me figure out how to map my generated texture with the current pixel I’m drawing.

I don't know if it matters, but I'm working with DirectX; an explanation in OpenGL terms would also be welcome.
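For context, the usual technique for this mapping is to reuse the clip-space position: after the perspective divide, x/w and y/w land in [-1, 1], which remaps to [0, 1] texture coordinates (with the v axis flipped in Direct3D). A minimal sketch of the arithmetic, with hypothetical names:

```cpp
#include <array>

// Maps a clip-space position (as produced by the projection transform,
// before the perspective divide) to [0,1] texture coordinates for
// sampling a full-screen texture such as a deferred light buffer.
// Direct3D convention: NDC y points up, texture v points down, so v is
// flipped.
std::array<float, 2> ClipToScreenUV(float x, float y, float w)
{
    float ndcX = x / w;                  // perspective divide -> [-1,1]
    float ndcY = y / w;
    return { ndcX * 0.5f + 0.5f,         // u in [0,1]
             -ndcY * 0.5f + 0.5f };      // v in [0,1], flipped
}
```

In a shader this is typically done per pixel from the interpolated clip-space position, plus a half-pixel offset on Direct3D 9.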

Can't read .cso files but I can read their .hlsl versions?


Well I’ve been trying to read a .cso file to use as a shader for a DirectX program I’m currently making.

Problem is, no matter how I implemented a way to read the file, it never worked. And after fidgeting around I discovered that it's only the .cso files I can't read.

I can read anything else (which means the code works), even their .hlsl files. Which is strange, because the .hlsl (High Level Shader Language) files are supposed to be compiled into .cso (compiled shader object) files.

What I’m currently doing is:

vector<byte> Read(string File){
    vector<byte> Text;
    fstream file(File, ios::in | ios::ate | ios::binary);

    if(file.is_open()){
        Text.resize(file.tellg());
        file.seekg(0 , ios::beg);
        file.read(reinterpret_cast<char*>(&Text[0]), Text.size());
        file.close();
    }        

    return Text;
}

If I then call it:

Read("VertexShader.hlsl"); // Works
Read("VertexShader.cso");  // Doesn't work?!

And I need the .cso version of the shader to draw my sexy triangles. Without it, my life and application will never continue, and I have no idea what could be wrong.

EDIT #1:
When I read the .hlsl file, the vector gets filled with what was written in the file.
For the .cso ones, the function doesn't even pass the `if(file.is_open())` test, and the vector stays empty.


Vertex definitions and shaders


I noticed from looking at other examples, say Riemers' tutorials, that he takes a buffer with a bunch of Vector3s in it and ties it to a shader which expects a float4. Why does this work in his situation and not in mine?

Also, is there a simple fix for this situation that will let the shader determine the w component? To my game logic it means nothing, but it is obviously crucial to the GPU.

Riemers' code is here:

http://www.riemers.net/eng/Tutorials/XNA/Csharp/Series4/Textured_terrain.php

and mine (key parts only):

CPU Code:

public struct TexturedVertex: IVertex
{
    public Vector3 Position { get; set; }
    public Vector2 Uv { get; set; }

    public TexturedVertex(Vector3 position, Vector2 uv) : this()
    {
        Position = position;
        Uv = uv;
    }
}

Shader Code:

struct VS_IN
{
    float4 pos : POSITION;
    float2 tex : TEXCOORD;
};

struct PS_IN
{
    float4 pos : SV_POSITION;
    float2 tex : TEXCOORD;
};

Texture2D picture;
SamplerState pictureSampler;

PS_IN VS(float4 inPos : POSITION, float2 uv : TEXCOORD)
{
    PS_IN output = (PS_IN)0;
    output.pos = mul(inPos, mul(World, ViewProjection));
    output.tex = uv;
    return output;
}

How do the two tie together?

I am, however, using SharpDX, not XNA, so my code for setting up the buffers is slightly different.

I created my own mesh class that does this:

VertexBuffer = Buffer.Create(device, BindFlags.VertexBuffer ,Vertices.ToArray());
context.InputAssembler.SetVertexBuffers(0, new VertexBufferBinding(VertexBuffer, Utilities.SizeOf<TexturedVertex>(), 0));

marshaling c# struct with array to const buffer


I am trying to use a constant buffer to pass a structure with an array of values into the pixel shader. However, all my ColoMap array values are coming in as zeros.
I have the struct defined as:

[StructLayout(LayoutKind.Explicit, Size = 1056)]
public struct MyColorMap
{
    public MyColorMap(float min, float max, Color4 nanColor)
    {
        Min = min;
        Max = max;
        NanColor = nanColor;
        ColoMap = new Color4[64];
    }

    [FieldOffset(0), MarshalAs(UnmanagedType.ByValArray, SizeConst = 64)]
    public Color4[] ColoMap;

    [FieldOffset(1024)]
    public Color4 NanColor;

    [FieldOffset(1040)]
    public float Min;

    [FieldOffset(1044)]
    public float Max;

    public static int Size
    {
        get { return Marshal.SizeOf(typeof(MyColorMap)); }
    }

}

The cbuffer in the HLSL matches as so:

cbuffer colormap :register(b1)
{
    float4 cmap[64];
    float4 nanColor;
    float cmapMin;  
    float cmapMax;      
};

In my C# code I initialize a structure and write some colors into the MyColorMap.ColoMap array. I pass it through by creating a Buffer and using UpdateSubresource, but when I use any of the color map array values inside the pixel shader, they are all zeros, and I get a black screen.

I am getting some parts of the buffer through: the "nanColor" value is coming through fine, as are the min and max floats.

Any advice on how to pass an array of colors through to the pixel shader? (Or hints on what I am doing wrong.)

TIA.
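For reference, HLSL constant buffers pack fields into 16-byte registers, so the offsets the C# struct must reproduce can be derived mechanically; a C++ mirror with compile-time checks (assuming Color4 is four floats):

```cpp
#include <cstddef>

// C++ mirror of the cbuffer from the question, laid out per HLSL's
// packing rules: float4 cmap[64] fills registers 0..63 (1024 bytes),
// nanColor takes the next register (offset 1024), and the two scalars
// share the following register (offsets 1040 and 1044). The total is
// rounded up to a 16-byte multiple: 1056 bytes.
struct ColorMapCB
{
    float cmap[64][4];   // offset    0, 1024 bytes
    float nanColor[4];   // offset 1024,   16 bytes
    float cmapMin;       // offset 1040
    float cmapMax;       // offset 1044
    float pad[2];        // pad to 1056 (16-byte multiple)
};

static_assert(offsetof(ColorMapCB, nanColor) == 1024, "nanColor offset");
static_assert(offsetof(ColorMapCB, cmapMin)  == 1040, "cmapMin offset");
static_assert(sizeof(ColorMapCB) == 1056, "cbuffer size");
```

The FieldOffset values in the question already match these, so one common culprit to check instead is that a managed array field is not blittable: a raw copy of such a struct can end up writing the array reference rather than the 64 colors themselves.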

Depth Map not rendering properly in DirectX / HLSL / SharpDX


I've been struggling with this for a while, and everything I find online says this SHOULD be working, but I apparently missed something.

I'm attempting to run deferred rendering in SharpDX in a Store App. I have a main effect that outputs my color, normal and depth into three render targets. My color and normal look good, but I can't get the depth to render properly, and I think the problem is my position calculations in the vertex shader. Of course, when I debug this in the VS2013 Graphics Analyzer, all the data is 'optimized' away, so I can't see my actual values.

As you can see in the depth target, every pixel is output as 1 in the red channel instead of the actual depth values. The horizon you see is the 'ground' being clipped by the far plane, so the depth target should show the full range.

(image)

Here's my effect:

float4x4 World;
float4x4 View;
float4x4 Projection;
float specularIntensity = 0.8f;
float specularPower = 0.5f;
texture2D Texture;
SamplerState TextureSampler
{
    AddressU = Wrap;
    AddressV = Wrap;
};
struct VertexShaderInput
{
    float4 Position : SV_POSITION;
    float3 Normal : NORMAL0;
    float2 TexCoord : TEXCOORD0;
};
struct VertexShaderOutput
{
    float4 Position : SV_POSITION;
    float2 TexCoord : TEXCOORD0;
    float3 Normal : TEXCOORD1;
    float2 Depth : TEXCOORD2;
};
VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;
    float4x4 wvp = mul(mul(World, View), Projection);
    output.Position = mul(float4(input.Position), wvp);
    output.TexCoord = input.TexCoord;
    output.Normal = mul(input.Normal, World);
    output.Depth.x = output.Position.z;
    output.Depth.y = output.Position.w;
    return output;
}
struct PixelShaderOutput
{
    float4 Color : SV_TARGET0;
    float4 Normal : SV_TARGET1;
    float Depth : SV_TARGET2;
};
PixelShaderOutput PixelShaderFunction(VertexShaderOutput input)
{
    PixelShaderOutput output;
    output.Color = Texture.Sample(TextureSampler, input.TexCoord);
    output.Color.a = specularIntensity;
    output.Normal.rgb = 0.5f * (normalize(input.Normal) + 1.0f);
    output.Normal.a = specularPower;
    output.Depth = input.Depth.x / input.Depth.y;
    return output;
}
technique Technique1
{
    pass Pass1
    {
        VertexShader = compile vs_4_0 VertexShaderFunction();
        PixelShader = compile ps_4_0 PixelShaderFunction();
    }
}
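One observation that may help with the "everything is 1" symptom: z/w after a perspective projection is heavily non-linear, so most of a scene maps to values just below 1.0 and looks saturated in an 8-bit preview even when the math is right. A quick numeric check (near and far plane values are assumptions):

```cpp
// For a standard D3D perspective projection, the post-projection depth
// z/w for a point at view-space distance d is:
//   z/w = (far / (far - near)) * (1 - near / d)
// With near = 1 and far = 1000, a point only 10 units away already maps
// to about 0.9, and most of the scene lands in the last few thousandths
// below 1.0 -- which renders as "all red" in an 8-bit preview.
float ProjectedDepth(float d, float nearZ, float farZ)
{
    return (farZ / (farZ - nearZ)) * (1.0f - nearZ / d);
}
```

This is one reason depth is often stored linearly (e.g. view-space distance over far-plane distance) for deferred pipelines, as in some of the other questions above.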

XNA HLSL UV Mapping


I was testing my HLSL lighting shader. I copied it from a tutorial and it works perfectly, but all the meshes in the model need to have texture coordinates. I guess this is because of this part:

VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;

    float4 worldPosition = mul(input.Position, World);
    float4 viewPosition = mul(worldPosition, View);
    output.Position = mul(viewPosition, Projection);

    output.WorldPosition = worldPosition;

    output.UV = input.UV; // <--- this line

    output.Normal = mul(input.Normal, World);

    return output;
}

With this code I get an error: “The current vertex declaration does not include all the elements required by the current vertex shader. TextureCoordinate0 is missing.”

I have a model with many meshes, and it would be much easier to change the code than to map all the meshes.

So I’ve replaced that line with this:

output.UV = float2(0,0);

and the error disappeared. Then I tried this:

if (TextureEnabled == true)
    output.UV = input.UV;
else
    output.UV = float2(0,0);

Then in the game code I pass the parameter Parameter["TextureEnabled"].SetValue(false) for all the model meshes, just for testing, but the error reappeared! How can it reappear if the shader will never reach that line? What am I doing wrong?

In XNA 4, how can I access SpriteBatch's transformMatrix in my shader?


I would like to use a custom effect with a regular XNA SpriteBatch. I have a 2D camera which computes a transform matrix, so I need my shader to take this into account.

I have put a world matrix property into my shader:

float4x4 World;

However, it does not get set by SpriteBatch:

spriteBatch.Begin(spriteSortMode, blendState, samplerState,
    depthStencilState, rasterizerState, effect, camera.WorldToScreen);

Everything is rendered properly if I set it manually in the draw loop:

effect.Parameters["World"].SetValue(camera.WorldToScreen);

How can I set up my shader parameters to make SpriteBatch set them up correctly?

Shader – Calculate depth relative to Object


I am trying to calculate depth relative to the object.
Here is a good solution for retrieving depth relative to the camera: Depth as distance to camera plane in GLSL

varying float distToCamera;

void main()
{
    vec4 cs_position = gl_ModelViewMatrix * gl_Vertex;
    distToCamera = -cs_position.z;
    gl_Position = gl_ProjectionMatrix * cs_position;
}

With this example the depth is relative to the camera.
But I would like to get the depth relative to the object: the same depth values whether I am near the object or far from it.

Here is an example of what I am trying to achieve. On the left you can see that the depth is relative to the camera. On the right, even if the camera moves back from the object, the depth remains the same because it depends only on the object.

(image)


HLSL SampleCmp compile error


When trying to compile the following HLSL:

Texture2DArray gShadowmap : register(TEXTURE_REGISTER_DEPTH);
SamplerState gShadowmapSampler : register(SAMPLER_REGISTER_DEPTH);

// ...

float3 projCoords = (float3)mul(gSplitVPMatrices[index], worldPos);
float viewDepth = projCoords.z - DEPTH_BIAS;
projCoords.z = float(index);
float visibilty = gShadowmap.SampleCmp(gShadowmapSampler, projCoords, viewDepth).r;

Visual Studio gives me:

error X3013: 'SampleCmp': no matching 3 parameter intrinsic method
error X3013: Possible intrinsic methods are:
error X3013: Texture2DArray<float4>.SampleCmp(SamplerComparisonState, float3|half3|min10float3|min16float3, float1|half1|min10float1|min16float1)
error X3013: Texture2DArray<float4>.SampleCmp(SamplerComparisonState, float3|half3|min10float3|min16float3, float1|half1|min10float1|min16float1, int2)
error X3013: Texture2DArray<float4>.SampleCmp(SamplerComparisonState, float3|half3|min10float3|min16float3, float1|half1|min10float1|min16float1, int2, float1|half1|min10float1|min16float1)
error X3013: Texture2DArray<float4>.SampleCmp(SamplerComparisonState, float3|half3|min10float3|min16float3, float1|half1|min10float1|min16float1, int2, float1|half1|min10float1|min16float1, out uint status)

I believe my call matches the first overload, so why the error?

2D HLSL World position


I’m trying to get world position from my vertex shader to my pixel shader so that I can disable the shader once a preset X coordinate has been passed (no shading once I’m over X).

Getting the screen position is not a problem so far, but despite my best efforts to study and implement examples, the calculations just don't return the world positions I'm looking for.

Update: I got it to somewhat work, but after compiling the shaders the output changes. Could anyone explain why this happens?

I should mention that I'm really new to HLSL; I've only been scripting so far.

Edit: added matrices.

world = Matrix.Identity;
view = Matrix.CreateScale(new Vector3(1, 0.75f, 0)) * Matrix.CreateTranslation(-playerpos.X, -playerpos.Y, 1);
projection = Matrix.CreateOrthographicOffCenter(0, view.Width, view.Height, 0, 0, 1);
Matrix halfPixelOffset = Matrix.CreateTranslation(-0.5f, -0.5f, 0);
projection = halfPixelOffset * projection;

texture lightMask;
sampler mainSampler : register(s0);
sampler lightSampler = sampler_state{Texture = lightMask;};
float4x4 World;
float4x4 View;
float4x4 Projection;

struct vs2ps
{
    float4 Pos : POSITION0;
    float4 TexCd : TEXCOORD0;
    float3 PosW : TEXCOORD1;
};

vs2ps VS(float4 Pos : POSITION0, float4 TexCd : TEXCOORD0)
{
    vs2ps Out;
    Out.Pos = mul(Pos, World * View * Projection);
    Out.TexCd = TexCd;
    Out.PosW = mul(Pos, World);
    return Out;
}

float4 PixelShaderFunction(vs2ps input) : COLOR0
{
    float2 texCoord = input.TexCd;
    float4 worldPosition = float4(input.PosW, 1.0f);
    float4 lightColor = tex2D(lightSampler, texCoord);
    float4 mainColor = tex2D(mainSampler, texCoord);
    if (worldPosition.x < 3500)
    {
        return (mainColor * lightColor);
    }
    else
        return mainColor;
}

HLSL Shader Relative Positioning


I’ve got a shader that does the texturing, lighting, etc. for my game engine (written on top of MonoGame, in case that’s relevant) for my block-terrain-based game (everything is rendered as triangles and each chunk has its own mesh made from logical block objects, recomputed when relevant). I have two versions of this shader, a SM2 one and an SM4 one (for different targets), both do the same thing and are almost identical but I primarily focus on the SM2 one since that is for the primary target.

Anyway, a week or so ago I implemented directional lighting, and it looks fine, except that it seems to fail to take the distance from the light source into account; i.e., every block looks like it is an equal distance from the light source.

I tried setting up some obvious solutions – I was already calculating the direction of the light using a parameter of the light position with the vertex position, so I took an intermediate (pre-normalization) value of this and used it as the distance to modulate the intensity of the light. Sounds good, yeah? Well, it somehow made no difference at all. I can’t seem to find a piece of software to help me debug a vertex/pixel shader combo (only lone pixel shaders), so I’m basically stuck thinking through the logic and trying things. That’s a familiar debug style for me, but I’ve been trying for a week or two and haven’t come up with anything that has made a difference.

I’ve since stripped this out of the shader (since it didn’t work, and I wanted to just work on other things for a while), but I was calculating the distance in the vertex shader (and passing it to the pixel shader) like this:

float3 final3DPos = mul(inPos, World);
float3 direction = -(final3DPos - LightPosition);
Output.LightDirection = direction;
Output.LightDistance = length(direction);

I also tried setting LightDistance to (abs(distance.x) + abs(distance.z)), with distance being what in the above is LightDistance – I ignored Y since it’s constant and figured it would exaggerate the difference between positions.. same effect!

In the pixel shader, LightDirection is normalized before use.

I tried outputting the distance as a color, and everything ended up black. I tried dividing it by various values in case the numbers were too big, and multiplying by 255, and no matter what I did everything was the same color.

I tried just using it in the calculations by dividing by it or deriving a multiplier to use from it, and again every block was lit the same.

I could pass the distance to the light in with the vertex information, but calculating that and sending it in for EVERY VERTEX seems really stupid and unnecessary.. :(
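For reference, the standard point-light falloff divides the intensity by a small quadratic in the distance rather than using the raw length directly, which also avoids blowing out nearby surfaces; a sketch with placeholder coefficients:

```cpp
// Classic point-light attenuation: intensity is divided by a quadratic
// in the distance. kc/kl/kq are tuning constants (placeholders here);
// keeping kc >= 1 bounds the factor at 1 so surfaces right next to the
// light are not blown out.
float Attenuation(float distance,
                  float kc = 1.0f, float kl = 0.09f, float kq = 0.032f)
{
    return 1.0f / (kc + kl * distance + kq * distance * distance);
}
```

If multiplying the lighting by a factor like this truly makes no visible difference, it is worth confirming the distance value actually varies at the pixel (e.g. that it is not being recomputed from a normalized direction).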

How do I sample a Depth/Stencil Texture in HLSL?


I am doing shadow mapping in Direct3D 9. I'm trying to avoid rendering depth to a 32-bit render target, so I've created a depth/stencil texture (a texture with Depth/Stencil usage). When I render, I do this:

//device
IDirect3DDevice9 *pd3dDevice = GetDevice();

// set an rt the same size as the depth/stencil texture(needs attention...this render target will not be rendered to)
pd3dDevice->SetRenderTarget(0, GetShadowMapRT());

// set the shadow map depth/stencil texture surface
pd3dDevice->SetDepthStencilSurface(GetShadowMapDSSurface());

pd3dDevice.SetStreamSource(0, GDEOctree.Geometry,
                Constants.OctreeVertexSize * GDEOctree.StaticGeometryVertexCount, 12);

// set vertex format
pd3dDevice.VertexFormat = SlimDX.Direct3D9.VertexFormat.Position;

// set indices
pd3dDevice.Indices = GDEOctree.ShadowCasterIndices;

// set render states
DeviceManager.SetRenderState(RenderState.ZEnable, 1);
DeviceManager.SetRenderState(RenderState.ZWriteEnable, 1);
DeviceManager.SetRenderState(RenderState.ZFunc, (int)Compare.LessEqual);
DeviceManager.SetRenderState(RenderState.CullMode, (int)Cull.Counterclockwise);
DeviceManager.SetRenderState(RenderState.StencilEnable, 0);
DeviceManager.SetRenderState(RenderState.AlphaBlendEnable, 0);
DeviceManager.SetRenderState(RenderState.ColorWriteEnable, 0);


// draw shadow casting geometry
...

// get the effect
ID3DXEffect *pFX = GetFX();

// set the shadow map depth/stencil texture
pFX->SetTexture(GetShadowMapDSTexture());

// in pixel shader sample shadow map...
?

The problem is that the depth/stencil texture is in a D24SX format. How do I sample from a texture of this format?

My original attempt, which of course does not work:

tex2D(g_SamplerShadowMap, vShadowMapUV);

I read a post here that says:

When reading from the texture, one extra component in texture
coordinates will be the depth to compare with.

Or: how do I convert the color returned from the sampler code above to a single floating-point value? The above code returns a 4-component vector, but depth should be a single value for comparison.

EDIT:

Below is the vertex shader for the shadow map pass. I do not compile a pixel shader for the shadow map pass because color is not written, only depth.

//--------------------------------------------------------------//
// ShadowMap Pass
//--------------------------------------------------------------//
void VS_ShadowMap( in float4 Position : POSITION0, 
                    out float4 oPos : POSITION0 )
{
    // position of geometry in the shadow emitting light's clip space
    oPos = mul(mul(Position, mW), g_mShadowViewProjection);
}

technique DeferredShading
{
    pass ShadowMap
    {
        VertexShader = compile vs_3_0 VS_ShadowMap();
        PixelShader = NULL;
    }   
}
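As far as I know, D3D9 cannot sample a plain depth/stencil surface as color without vendor-specific formats (e.g. the INTZ fourCC hack), so one common workaround is to render depth packed into the RGBA channels of an ordinary target and reconstruct the scalar on read. The CPU-side analogue of that pack/unpack arithmetic, as a sketch:

```cpp
#include <cmath>
#include <cstdint>

// Packs a depth value in [0,1) into four 8-bit channels and unpacks it
// back to a single float -- the CPU-side analogue of the common HLSL
// trick done with frac() and a dot() against (1, 1/256, 1/65536, ...).
void PackDepth(float depth, uint8_t out[4])
{
    for (int i = 0; i < 4; ++i) {
        depth *= 256.0f;
        float ipart = std::floor(depth); // 0..255 since depth < 1
        out[i] = static_cast<uint8_t>(ipart);
        depth -= ipart;                  // keep the fractional remainder
    }
}

float UnpackDepth(const uint8_t in[4])
{
    return in[0] / 256.0f + in[1] / 65536.0f
         + in[2] / 16777216.0f + in[3] / 4294967296.0f;
}
```

The round trip is accurate to roughly 2^-32, which is more precision than a 24-bit depth buffer provides.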

HLSL: An array of textures and sampler states


The shader must switch between multiple textures depending on the alpha value of the original texture for each pixel. Now, this would work fine if I didn't have to worry about SamplerStates. I have created my array of textures and can select a texture based on the alpha value of the pixel. But how do I create an array of SamplerStates and link it to my array of textures? I attempted to treat the SamplerState as a function by adding the (int i), but that didn't work. Also, I can't use Texture.Sample since this is shader model 2.0.

//shader model 2.0 (DX9)    
texture subTextures[255];
SamplerState MeshTextureSampler(int i)
{
    Texture = (subTextures[i]);
};

float4 SampleCompoundTexture(float2 texCoord, float4 diffuse)
{
    float4 SelectedColor = SAMPLE_TEXTURE(Texture, texCoord);
    int i = SelectedColor.a;
    texture SelectedTx = subTextures[i];
    return tex2D(MeshTextureSampler(i), texCoord) * diffuse;
}