Channel: Question and Answer » hlsl

Encoding Floats to RGBA and Blending causing artifacts


I am using float packing to encode a float value inside an RGBA texture because, unfortunately, I don’t have access to float textures.
Here are the functions I am using for encoding and decoding.

inline float4 EncodeFloatRGBA( float v ) {
  // 255^2 = 65025, 255^3 = 16581375: each channel holds one base-255 "digit"
  float4 enc = float4(1.0, 255.0, 65025.0, 16581375.0) * v;
  enc = frac(enc);
  enc -= enc.yzww * float4(1.0/255.0, 1.0/255.0, 1.0/255.0, 0.0);
  return enc;
}

inline float DecodeFloatRGBA( float4 rgba ) {
  return dot( rgba, float4(1.0, 1.0/255.0, 1.0/65025.0, 1.0/16581375.0) );
}

This works pretty well. But when I use blending to obtain transparency, the result becomes totally strange wherever faces overlap.

Let’s imagine two grey quads: as I am using additive blending, if they overlap I should obtain some black, but with this packing system I am getting strange results.
Here is an example.

[screenshot of the blending artifacts]

Any idea how to solve this?
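For intuition about why this combination fails: the packing stores v as base-255 “digits”, one per channel, while additive blending adds each 8-bit channel independently, with saturation and without any carry between channels. A comment-only sketch of the failure mode:

// Decoding is linear (a dot product), so in exact arithmetic
// decode(enc(a) + enc(b)) == a + b. But the render target blends each
// 8-bit channel separately, saturating at 1.0 with no carry.
// Decimal analogy: adding 0.19 + 0.19 one digit at a time gives
// tenths 1 + 1 = 2, and hundredths 9 + 9 saturates to 9, i.e. 0.29
// instead of 0.38. The packed channels fail the same way, so the
// blended texture decodes to garbage wherever faces overlap.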


How can I add sphere falloff to my specular lighting implementation?


I’m using a point light in my game and have tried to add specular lighting. It looks good, but when standing close to a wall the player can clearly see where the light ends.

Image from front: [screenshot]

Image when near wall: [screenshot]

I get a similar effect in Blender after putting a point light near a wall. In the light options there is a toggle for “sphere falloff”; maybe I need something similar? How could I add such a falloff to my implementation?

Here is my shader code where I compute the lighting:

struct MESH_OUTPUT
{
    float4 pos : POSITION;
    float2 tex : TEXCOORD0;
    float3 normal : TEXCOORD1;
    float3 posWorld : TEXCOORD3;
    float3 viewDir : TEXCOORD6;
};
//******************************************************************************
float4 ps_mesh(in MESH_OUTPUT In) : COLOR0
{
    // some removed code here (assigns tex and normal)
    float4 tex;
    float3 normal;

    float specularIntensity = tex2D(samplerSpecular, In.tex).r;
    float specular = 0;
    float lightIntensity = 0;

    float3 diffuse = float3(0, 0, 0);
    for (int i = 0; i < 1; ++i) // only 1 light for testing
    {
        float3 lightVec = normalize(lights[i].pos - In.posWorld);
        float dist = distance(lights[i].pos, In.posWorld);
        // lights[i].pos.w appears to hold the light radius
        float light = clamp(dot(lightVec, normal), 0, 1) * clamp(1 - (dist / lights[i].pos.w), 0, 1);
        if (light > 0)
        {
            float3 reflection = normalize(light * 2 * normal - lightVec);
            specular += pow(saturate(dot(reflection, normalize(In.viewDir))), 10) * specularIntensity;
        }
        diffuse += light * lights[i].color;
        lightIntensity += light;
    }
    lightIntensity = saturate(lightIntensity);
    specular = saturate(specular);

    tex = float4(saturate((tex.xyz * (ambientColor + diffuse) * 0.6) + float3(1, 1, 1) * specular * 0.5), tex.w);

    return tex;
}
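For reference, one common way to get a smooth spherical cutoff (a sketch; not necessarily what Blender’s option does) is to square the clamped linear attenuation term, so the falloff reaches zero with zero slope at the light radius instead of ending in a visible hard edge:

// Hypothetical replacement for the attenuation term in the loop above;
// lights[i].pos.w is assumed to hold the light radius, as in the code.
float atten = saturate(1 - dist / lights[i].pos.w);
atten *= atten; // or: atten = smoothstep(0, 1, atten);
float light = saturate(dot(lightVec, normal)) * atten;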

Compiling a shader with fxc results in invalid ps_5_0 output semantic 'COLOR0'


I’m attempting to compile a shader at the command prompt. What am I doing wrong that would make it generate this error?

fxc /Od /Zi /T ps_5_0 /E "ps_main" /Fo "basic.pso" "basic.ps"

Here is the pixel shader

struct VS_OUTPUT
{
    float4 Color    : COLOR0;
};


float4 ps_main(VS_OUTPUT input) : COLOR0
{
    return input.Color;
}

The output is:

<path here>basic.ps(8,33-40): error X4502: invalid ps_5_0 output semantic 'COLOR0'

compilation failed; no code produced
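For what it’s worth, Shader Model 4 and later replaced the COLOR0 pixel shader output semantic with the system-value semantic SV_Target, so a minimal fix for ps_5_0 (assuming render target 0) would be:

float4 ps_main(VS_OUTPUT input) : SV_Target
{
    return input.Color;
}

The COLOR0 on the VS_OUTPUT member should still be accepted, since it is just an ordinary inter-stage semantic name; only the pixel shader’s output semantic needs to change.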

HLSL – Binary operations


I’m trying to do bitwise operations with integers in HLSL code. For example:

int n = 10 & 15;

The binary value of 10 is 1010 and the binary value of 15 is 1111.

With this, n = 10, because 1010 & 1111 = 1010. This is what I need, but if I declare the int values beforehand, I get an error. For example:

int n1 = 10;
int n2 = 15;
int n = n1 & n2;

This shows the error: “error X3535: Bitwise operations not supported on legacy targets.”

I made this simple example to illustrate the problem, because I have no idea what is causing it. (Presumably the literal-only version works because the compiler folds 10 & 15 at compile time, while the variable version has to emit a real bitwise instruction, which shader models below 4.0 lack.)

I already tried changing the variable types to uint, but the error remains.

I’m using deferred lighting in XNA and I want to store the emissive light intensity together with the specular power.
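Since the goal is just to pack two values into one channel, a bitwise-free fallback that compiles on SM2/SM3 targets is integer/fraction packing with floor and fmod. A sketch, assuming emissive intensity a and specular power b are both pre-normalized to [0, 1] and 4 bits of precision each is acceptable:

// Pack two 4-bit values into one 8-bit channel without & or >>:
float packed = (floor(a * 15.0) * 16.0 + floor(b * 15.0)) / 255.0;

// Unpack:
float byteVal = floor(packed * 255.0 + 0.5); // recover the stored byte
float a2 = floor(byteVal / 16.0) / 15.0;
float b2 = fmod(byteVal, 16.0) / 15.0;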

In XNA 4, how can I access SpriteBatch's transformMatrix in my shader?


I would like to use a custom effect with a regular XNA SpriteBatch. I have a 2D camera which computes a transform matrix, so I need my shader to take this into account.

I have put a world matrix property into my shader:

float4x4 World;

However, it does not get set by SpriteBatch:

spriteBatch.Begin(spriteSortMode, blendState, samplerState,
    depthStencilState, rasterizerState, effect, camera.WorldToScreen);

Everything is rendered properly if I set it manually in the draw loop:

effect.Parameters["World"].SetValue(camera.WorldToScreen);

How can I set up my shader parameters so that SpriteBatch sets them correctly?

Vertex definitions and shaders [closed]


I noticed from looking at other examples (say, Riemers’ tutorials) that he takes a buffer with a bunch of Vector3s in it and ties it to a shader which expects a float4. Why does this work in his situation and not mine?

Also, is there a simple fix for this situation that will let the shader determine the w component? To my game logic it means nothing, but it is obviously crucial to the GPU.

Riemers’ code is here:

http://www.riemers.net/eng/Tutorials/XNA/Csharp/Series4/Textured_terrain.php

and mine (key parts only):

CPU Code:

public struct TexturedVertex: IVertex
{
    public Vector3 Position { get; set; }
    public Vector2 Uv { get; set; }

    public TexturedVertex(Vector3 position, Vector2 uv) : this()
    {
        Position = position;
        Uv = uv;
    }
}

Shader Code:

struct VS_IN
{
    float4 pos : POSITION;
    float2 tex : TEXCOORD;
};

struct PS_IN
{
    float4 pos : SV_POSITION;
    float2 tex : TEXCOORD;
};

Texture2D picture;
SamplerState pictureSampler;

PS_IN VS(float4 inPos : POSITION, float2 uv : TEXCOORD)
{
    PS_IN output = (PS_IN)0;
    output.pos = mul(inPos, mul(World, ViewProjection));
    output.tex = uv;
    return output;
}

How do the two tie together?

I am, however, using SharpDX rather than XNA, so my code for setting up the buffers is slightly different.

I created my own mesh class that does this:

VertexBuffer = Buffer.Create(device, BindFlags.VertexBuffer, Vertices.ToArray());
context.InputAssembler.SetVertexBuffers(0,
    new VertexBufferBinding(VertexBuffer, Utilities.SizeOf<TexturedVertex>(), 0));
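As for why a Vector3 buffer can feed a float4 : POSITION input: when the input layout element (e.g. DXGI_FORMAT_R32G32B32_FLOAT) has fewer components than the shader input, the input assembler pads the missing components with defaults (0, 0, 0, 1), so w arrives as 1 automatically. If you prefer to be explicit, take a float3 in the shader and append w yourself; a sketch using the same World and ViewProjection globals as the snippet above:

struct VS_IN
{
    float3 pos : POSITION; // matches the Vector3 in the vertex buffer
    float2 tex : TEXCOORD;
};

PS_IN VS(VS_IN input)
{
    PS_IN output = (PS_IN)0;
    // Supply w = 1 explicitly; game code never needs to store it.
    output.pos = mul(float4(input.pos, 1.0f), mul(World, ViewProjection));
    output.tex = input.tex;
    return output;
}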

“_SRGB” suffix for BC texture format doesn't result in sRGB-to-linear correction at sampling


I am working on a 3D engine as a hobby (Direct3D 11). Currently I am trying to implement sRGB -> linear -> sRGB color-space conversions via texture formats with the “_SRGB” suffix. My textures are supposed to be sRGB images (for example, DDS files compressed in BC1_UNORM_SRGB format), the output is also gamma-corrected (thanks to an R8G8B8A8_UNORM_SRGB frame buffer), and all shader calculations are done in linear color space.

The problem is, whether I use an sRGB or non-sRGB format for the DDS file, after sampling the texture I get exactly the same color values inside the shader as are stored in the original image. But if I understand correctly, the sampler should apply implicit gamma correction (roughly pow(color, 2.2f)) to the input values. So, for example, if I want to output the same color that is in the map (let’s say 0.5f in the R channel), I sample it from the texture (0.5f becomes about 0.218f after the sampler converts to linear space), do nothing with it inside the shader, and send it to the output. As the frame buffer has an sRGB format, the output merger (or whichever part of the pipeline does it) applies the inverse correction (pow(color, 1.0f / 2.2f)), and 0.218f becomes 0.5f again, so I get the same image I had on input.

The output colors are definitely gamma-corrected, as I see a clear difference when I change the frame-buffer format from sRGB to non-sRGB. But since no input correction is applied, the final image looks brightened compared to the input one. As I said, I checked the value in the pixel shader after the texture.Sample() call, and it is exactly what the color picker shows for the source texture. When I swap the texture format to non-sRGB, nothing changes (I use Visual Studio 2013 Update 4 to change the format).

To load DDS files I use DDSTextureLoader. I have also called the Direct3D methods directly to create the resource and view from the file, but nothing changed. Both the resource and the shader resource view have an “_SRGB” format, as I can see in the Graphics Debugger, so they are definitely sRGB.

I’ve read on MSDN that in Direct3D 11 setting the texture format is enough for the sampler to recognize an sRGB image. Is there something I am doing wrong or missing, or understanding incorrectly? Maybe someone has had similar issues? Any advice would be highly appreciated!

What will happen if the argument of mix() or clamp() is above 1 or below 0?


There are two magnificent intrinsics: mix() in GLSL and lerp() in HLSL, which are used to implement linear interpolation. Let’s say we have a variable:

float v = ?; // where ? can be [-FLOAT_MAX, +FLOAT_MAX]

and then we do:

gl_FragColor = mix(value1, value2, v);

So, the question is: does this work correctly under GL or DirectX, or should I EXPLICITLY clamp the value of v like this:

gl_FragColor = mix(value1, value2, clamp(v, 0.0, 1.0));
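For reference, both intrinsics are defined as plain linear math with no implicit clamping, so an out-of-range v extrapolates past the endpoints:

// lerp(a, b, t) == a + t * (b - a) in HLSL; mix() is the same in GLSL.
float4 c = lerp(float4(0, 0, 0, 1), float4(1, 1, 1, 1), 2.0); // rgb == 2.0

// So yes: clamp (saturate) explicitly whenever v can leave [0, 1].
float4 cSafe = lerp(value1, value2, saturate(v));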

Blending Lightmaps and Dynamic Texture Shadows in HLSL


I’m using Gile[s] as my lightmapper, and my engine can execute HLSL shaders with DirectX 9.

I would like to accomplish something like this for performance reasons. I was told that this technique can only be done by shaders.

Lightmap and Shadow Blending

Someone has provided me with a quick code example and starting code, but it is still far from a working implementation.

lightmapColor *= shadowmapValue;
colorIntensity = (lightmapColor.r + lightmapColor.g + lightmapColor.b) / 3.0;
lightCol = (1 - colorIntensity) * ambient + colorIntensity * lightmapColor;

I am not sure whether it is applicable in my engine, so I am looking for some guidance and perhaps some basic HLSL code to accomplish this. My lightmap texture uses the UV2 channel, if that helps. Thanks.
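A minimal sketch of how that snippet might sit in a DirectX 9 pixel shader, assuming samplers named baseMap, lightMap, and shadowMap, an ambient global, and the lightmap on the second UV set (all of these names are placeholders, not code from Gile[s]):

sampler baseMap;   // diffuse texture, UV1
sampler lightMap;  // baked lightmap, UV2
sampler shadowMap; // dynamic shadow factor, assumed already projected
float3 ambient;

float4 PS(float2 uv : TEXCOORD0, float2 uv2 : TEXCOORD1) : COLOR0
{
    float4 baseColor = tex2D(baseMap, uv);
    float3 lightmapColor = tex2D(lightMap, uv2).rgb;
    float shadow = tex2D(shadowMap, uv).r;

    lightmapColor *= shadow;
    float intensity = (lightmapColor.r + lightmapColor.g + lightmapColor.b) / 3.0;
    float3 lightCol = (1 - intensity) * ambient + intensity * lightmapColor;

    return float4(baseColor.rgb * lightCol, baseColor.a);
}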

Transparency using HLSL in XNA


I am currently working with depth data from the Kinect SDK v1.8 in XNA, and I want to show an image inside the depth view of a human body. The image below is just an example of what I want to do:

http://static.gamespot.com/uploads/original/1535/15354745/2429785-screen4.jpg

For the depth view, this is what I’ve done:

void kinectSensor_DepthFrameReady(object sender, DepthImageFrameReadyEventArgs e)
{
    using (DepthImageFrame depthImageFrame = e.OpenDepthImageFrame())
    {
        if (depthImageFrame != null)
        {
            short[] pixelsFromFrame = new short[depthImageFrame.PixelDataLength];

            depthImageFrame.CopyPixelDataTo(pixelsFromFrame);
            byte[] convertedPixels = ConvertDepthFrame(pixelsFromFrame, ((KinectSensor)sender).DepthStream, 640 * 480 * 4);

            Color[] color = new Color[depthImageFrame.Height * depthImageFrame.Width];
            kinectRGBVideo = new Texture2D(graphics.GraphicsDevice, depthImageFrame.Width, depthImageFrame.Height);

            // Use convertedPixels from the DepthImageFrame as the data source for our Texture2D
            kinectRGBVideo.SetData<byte>(convertedPixels);
        }
    }
}


// Converts a 16-bit grayscale depth frame which includes player indexes into a 32-bit frame
// that displays different players in different colors
private byte[] ConvertDepthFrame(short[] depthFrame, DepthImageStream depthStream, int depthFrame32Length)
{
    int tooNearDepth = depthStream.TooNearDepth;
    int tooFarDepth = depthStream.TooFarDepth;
    int unknownDepth = depthStream.UnknownDepth;
    byte[] depthFrame32 = new byte[depthFrame32Length];

    for (int i16 = 0, i32 = 0; i16 < depthFrame.Length && i32 < depthFrame32.Length; i16++, i32 += 4)
    {
        int player = depthFrame[i16] & DepthImageFrame.PlayerIndexBitmask;
        int realDepth = depthFrame[i16] >> DepthImageFrame.PlayerIndexBitmaskWidth;

        // transform 13-bit depth information into an 8-bit intensity appropriate
        // for display (we disregard information in most significant bit)
        byte intensity = (byte)(~(realDepth >> 4));

        if (player == 0 && realDepth == 0)
        {
            // white 
            depthFrame32[i32 + RedIndex] = 255;
            depthFrame32[i32 + GreenIndex] = 255;
            depthFrame32[i32 + BlueIndex] = 255;
        }
        else if (player == 0 && realDepth == tooFarDepth)
        {
            // dark purple
            depthFrame32[i32 + RedIndex] = 66;
            depthFrame32[i32 + GreenIndex] = 0;
            depthFrame32[i32 + BlueIndex] = 66;
        }
        else if (player == 0 && realDepth == unknownDepth)
        {
            // dark brown
            depthFrame32[i32 + RedIndex] = 66;
            depthFrame32[i32 + GreenIndex] = 66;
            depthFrame32[i32 + BlueIndex] = 33;
        }
        else
        {
            // tint the intensity by dividing by per-player values
            depthFrame32[i32 + RedIndex] = (byte)(intensity >> IntensityShiftByPlayerR[player]);
            depthFrame32[i32 + GreenIndex] = (byte)(intensity >> IntensityShiftByPlayerG[player]);
            depthFrame32[i32 + BlueIndex] = (byte)(intensity >> IntensityShiftByPlayerB[player]);
        }
    }

    return depthFrame32;
}

I’m not sure how I can get the object inside the depth view of the body.
Update: I found out I can use HLSL to achieve this for a 3D model:

float4x4 World;
float4x4 View;
float4x4 Projection;
float4x4 WorldInverseTranspose;

float4 AmbientColor = float4(1, 1, 1, 1);
float AmbientIntensity = 0.1;

float3 DiffuseLightDirection = float3(1, 0, 0);
float4 DiffuseColor = float4(1, 1, 1, 1);
float DiffuseIntensity = 1.0;

float Shininess = 200;
float4 SpecularColor = float4(1, 1, 1, 1);
float SpecularIntensity = 1;
float3 ViewVector = float3(1, 0, 0);

float Transparency = 0.5;

texture ModelTexture;
sampler2D textureSampler = sampler_state {
    Texture = (ModelTexture);
    MinFilter = Linear;
    MagFilter = Linear;
    AddressU = Clamp;
    AddressV = Clamp;
};

struct VertexShaderInput
{
    float4 Position : POSITION0;
    float4 Normal : NORMAL0;
    float2 TextureCoordinate : TEXCOORD0;
};

struct VertexShaderOutput
{
    float4 Position : POSITION0;
    float4 Color : COLOR0;
    float3 Normal : TEXCOORD0;
    float2 TextureCoordinate : TEXCOORD1;
};

VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;

    float4 worldPosition = mul(input.Position, World);
    float4 viewPosition = mul(worldPosition, View);
    output.Position = mul(viewPosition, Projection);

    float4 normal = normalize(mul(input.Normal, WorldInverseTranspose));
    float lightIntensity = dot(normal, DiffuseLightDirection);
    output.Color = saturate(DiffuseColor * DiffuseIntensity * lightIntensity);

    output.Normal = normal;

    output.TextureCoordinate = input.TextureCoordinate;
    return output;
}

float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
    float3 light = normalize(DiffuseLightDirection);
    float3 normal = normalize(input.Normal);
    float3 r = normalize(2 * dot(light, normal) * normal - light);
    float3 v = normalize(mul(normalize(ViewVector), World));
    float dotProduct = dot(r, v);

    float4 specular = SpecularIntensity * SpecularColor * max(pow(dotProduct, Shininess), 0) * length(input.Color);

    float4 textureColor = tex2D(textureSampler, input.TextureCoordinate);
    textureColor.a = 1;

    float4 color = saturate(textureColor * input.Color + AmbientColor * AmbientIntensity + specular);
    color.a = Transparency;
    return color;
}

technique Textured
{
    pass Pass1
    {
        AlphaBlendEnable = TRUE;
        DestBlend = INVSRCALPHA;
        SrcBlend = SRCALPHA;
        VertexShader = compile vs_2_0 VertexShaderFunction();
        PixelShader = compile ps_2_0 PixelShaderFunction();
    }
}

and this is my Draw code in the Game1 class:

protected override void Draw(GameTime gameTime)
{
    GraphicsDevice.Clear(Color.Black);

    DrawModelWithEffect(model, world, view, projection);

    base.Draw(gameTime);
}

private void DrawModelWithEffect(Model model, Matrix world, Matrix view, Matrix projection)
{
    foreach (ModelMesh mesh in model.Meshes)
    {
        foreach (ModelMeshPart part in mesh.MeshParts)
        {
            part.Effect = effect;
            effect.Parameters["World"].SetValue(world * mesh.ParentBone.Transform);
            effect.Parameters["View"].SetValue(view);
            effect.Parameters["Projection"].SetValue(projection);
        }
        mesh.Draw();
    }
}

My problem is that I want to use this same idea to create transparency between two 2D images.

Is there something like this out there for 2D? If so, where can I find it?
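For 2D, the same alpha idea can be done in a SpriteBatch effect rather than a model effect. A minimal sketch of a pixel shader that fades one sprite over another (Transparency is an assumed effect parameter; s0 is the texture SpriteBatch binds):

sampler s0 : register(s0); // texture bound by SpriteBatch
float Transparency = 0.5;

float4 PS(float2 uv : TEXCOORD0, float4 tint : COLOR0) : COLOR0
{
    float4 texColor = tex2D(s0, uv) * tint;
    texColor *= Transparency; // scale the whole color, see note below
    return texColor;
}

technique Transparent
{
    pass P0
    {
        PixelShader = compile ps_2_0 PS();
    }
}

The whole color is scaled (rather than just .a) because XNA 4 content is premultiplied-alpha by default and BlendState.AlphaBlend expects premultiplied colors; if your textures are non-premultiplied and you draw with BlendState.NonPremultiplied, scale only texColor.a instead.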

How is this particular HLSL condition treated with respect to compile- or run-time evaluation?


Let’s say I have this very simple pixel shader (cbuffers and other stuff omitted)

float4 PS(VertexOut pin, uniform bool useLighting) : SV_Target
{
    float4 retColor = gDiffuseMap.Sample( sampler0, pin.Tex );
    if (useLighting)
    {
        retColor = retColor * float4(gAmbientLight, 1.0f);
    }
    return retColor;
}

and two techniques such as

technique11 TexTech {
    pass P0 {
        SetVertexShader( CompileShader( vs_4_0, VS()));
        SetGeometryShader(NULL);
        SetPixelShader(CompileShader( ps_4_0, PS(false)));
    }
}

technique11 TexLitTech {
    pass P0 {
        SetVertexShader( CompileShader(vs_4_0, VS()));
        SetGeometryShader(NULL);
        SetPixelShader(CompileShader(ps_4_0, PS(true)));
    }
}

The way I understand it, the useLighting condition is evaluated at compile time, and each technique gets its own version of the pixel shader function without any branching. That means the useLighting condition has no runtime penalty. Is that correct? So it’s kind of like C preprocessing?

Why can the pin variable just be left out like that in the CompileShader call? It makes sense, of course; I’m just wondering whether this is some special HLSL or Effect Framework syntax.
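Yes: uniform parameters supplied in CompileShader are baked in as compile-time constants, each technique gets a specialized shader with the branch folded away, and only the non-uniform parameters (pin here) remain as actual shader inputs, which is why they are omitted from the call. The preprocessor formulation the question compares it to would look roughly like this (a sketch):

#define USE_LIGHTING 1

float4 PS(VertexOut pin) : SV_Target
{
    float4 retColor = gDiffuseMap.Sample(sampler0, pin.Tex);
#if USE_LIGHTING
    retColor = retColor * float4(gAmbientLight, 1.0f);
#endif
    return retColor;
}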

Can someone explain to me how setting shader parameters isn't a bottleneck?


I am trying to understand this. I have a bunch of models that need to be rendered, and each model most likely has various “sub-meshes” with their own diffuse, specular, etc. textures. So for each of these models I have to loop through, set the diffuse, specular, and other textures, and then issue a draw call. Now let’s say I do this a couple of times for each model, and I have 100 models in a scene. Since my C++ code that sets these per-model parameters runs on the CPU, doesn’t it have to bus all that data to the GPU a ridiculous number of times? Even if it doesn’t, and the GPU is caching, wouldn’t the CPU still need to make several inquiries to make sure the correct resources are cached?

I am asking because I have a single high-res model I am rendering, and once textures are applied I drop from 1000 FPS to 70. This model has about 20 different textures. It isn’t something that will be used in a game; I am solely using it to stress test and locate bottlenecks. Thanks!

Geometry shader: Dynamic output?


I’m currently using a geometry shader to generate grass blades out of single root points that are laid out in a grid. For each root point I generate a grass blade with, right now, a constant number of vertices.

However, for level of detail, I would like to generate a number of vertices that depends on the distance to the camera. Since there are no dynamic arrays, I tried to declare multiple techniques which call the geometry shader with different vertex counts.

I would then be able to divide my grass into patches/smaller grids, calculate the distance to the camera for this patch and then call the technique with the appropriate number of vertices.

The HLSL part looks something like this:

[maxvertexcount(24)]
void GS_LOD1(point GEO_IN points[1], inout TriangleStream<GEO_OUT> output) 
{
    GS_Shader(points, 8, output);
}

[maxvertexcount(24)]
void GS_LOD2(point GEO_IN points[1], inout TriangleStream<GEO_OUT> output)
{
    GS_Shader(points, 16, output);
}

technique LevelOfDetail1
{
    pass Pass1
    {
        VertexShader = compile vs_4_0 VS_Shader();
        GeometryShader = compile gs_4_0 GS_LOD1();
        PixelShader = compile ps_4_0 PS_Shader();
    }
}

technique LevelOfDetail2
{
    pass Pass1
    {
        VertexShader = compile vs_4_0 VS_Shader();
        GeometryShader = compile gs_4_0 GS_LOD2();
        PixelShader = compile ps_4_0 PS_Shader();
    }
}

And the definition of the GS function:

void GS_Shader(point GEO_IN points[1], in const int realVertexCount, inout TriangleStream<GEO_OUT> output) 
{
  [...]
  GEO_OUT v[realVertexCount];

However, even this way the compiler complains:

array dimensions must be literal scalar expressions

Is this possible in any way? I guess what would work is just writing several geometry shaders that basically do the same thing, each with an already-defined number of vertices, but that sounds a bit messy.

Thanks!
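One detail that may make this workable without the array (a sketch, with ComputeBladeVertex as a hypothetical helper): [maxvertexcount] is only a compile-time upper bound, and the number of vertices actually appended may vary per primitive at run time. So the shared function can append straight to the stream inside a loop with a variable bound, with the LOD wrappers passing 8 or 16 exactly as in the question, or with the count read from a constant buffer for fully dynamic LOD:

void GS_Shader(point GEO_IN points[1], in const int realVertexCount,
               inout TriangleStream<GEO_OUT> output)
{
    // No GEO_OUT v[realVertexCount] needed: emit each vertex as it
    // is computed instead of buffering the blade in an array.
    for (int i = 0; i < realVertexCount; ++i)
    {
        GEO_OUT v = ComputeBladeVertex(points[0], i); // hypothetical
        output.Append(v);
    }
    output.RestartStrip();
}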

Very subtle HLSL syntax change causes compilation error


The following HLSL works and compiles:

texture2D renderTarget;
float h; // declared here

sampler GetRenderTarget = sampler_state
{
    texture = <renderTarget>;
};

float3 GetHsvFromRgb(float3 c)
{
    float4 k = float4(0.0, -1.0 / 3.0, 2.0 / 3.0, -1.0);
    float4 p = lerp(float4(c.bg, k.wz), float4(c.gb, k.xy), step(c.b, c.g));
    float4 q = lerp(float4(p.xyw, c.r), float4(c.r, p.yzx), step(p.x, c.r));
    float d = q.x - min(q.w, q.y);
    float e = 1.0e-10;

    return float3(abs(q.z + (q.w - q.y) / (6.0 * d + e)), d / (q.x + e), q.x);
}

float3 GetRgbFromHsv(float3 c)
{
    float4 k = float4(1.0, 2.0 / 3.0, 1.0 / 3.0, 3.0);
    float3 p = abs(frac(c.xxx + k.xyz) * 6.0 - k.www);

    return c.z * lerp(k.xxx, clamp(p - k.xxx, 0.0, 1.0), c.y);
}

float4 ShadeVertex(float3 p : POSITION0, inout float2 t : TEXCOORD0) : POSITION0
{
    return float4(p, 1);
}

float4 ShadePixel(float4 p : POSITION0, float2 t : TEXCOORD0) : COLOR0
{
    float4 c = tex2D(GetRenderTarget, t);
    float3 hsv = GetHsvFromRgb(c);

    hsv.x = (int)h; // compiles and works

    float3 rgb = GetRgbFromHsv(hsv);

    return float4(rgb, h);
}

technique Simple
{
    pass FirstPass
    {
        VertexShader = compile vs_3_0 ShadeVertex();
        PixelShader = compile ps_3_0 ShadePixel();
    }
}

However, if I remove the cast from the h variable, it causes an exception:

float4 ShadePixel(float4 p : POSITION0, float2 t : TEXCOORD0) : COLOR0
{
    float4 c = tex2D(GetRenderTarget, t);
    float3 hsv = GetHsvFromRgb(c);

    hsv.x = h; // removing int cast causes compilation exception

    float3 rgb = GetRgbFromHsv(hsv);

    return float4(rgb, h);
}

This doesn’t really make any sense to me. The h variable is set to 1 in code. The int cast is not what I actually want to do; I was just testing various things, trying to understand why I couldn’t assign the h variable’s value directly. The exception I get after removing the int cast is:

(51,18): ID3DXEffectCompiler::CompileEffect: There was an error compiling expression

More confusingly, if I cast to a float, it goes back to not working:

float4 ShadePixel(float4 p : POSITION0, float2 t : TEXCOORD0) : COLOR0
{
    float4 c = tex2D(GetRenderTarget, t);
    float3 hsv = GetHsvFromRgb(c);

    hsv.x = (float)h; // also an exception

    float3 rgb = GetRgbFromHsv(hsv);

    return float4(rgb, h);
}

I’m not sure what is going on or what I’m doing wrong. Any insight?

DirectX Compute shader (HLSL) makes texture black


Hello, this is my first question on this forum. :)
When I use a compute shader in DirectX to change the color to blue, the texture only comes out black.
I really don’t understand what the problem could be, and I would be very thankful if anyone could explain it to me. :)

// GLOBALS
ID3D11ShaderResourceView* bth;
ID3D11UnorderedAccessView* bthOut = nullptr;
ID3D11ComputeShader* computeShader = nullptr;
ID3D11UnorderedAccessView* unbindUAV = nullptr;
ID3D11ShaderResourceView* unbindSRV = nullptr;

// DESCRIPTION TO TEXTURE AND UNORDERED ACCESS VIEW
D3D11_TEXTURE2D_DESC bthTexDesc;
ZeroMemory(&bthTexDesc, sizeof(bthTexDesc));
bthTexDesc.Width = BTH_IMAGE_WIDTH;
bthTexDesc.Height = BTH_IMAGE_HEIGHT;
bthTexDesc.MipLevels = 1;
bthTexDesc.ArraySize = 1;
bthTexDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
bthTexDesc.SampleDesc.Count = 1;
bthTexDesc.SampleDesc.Quality = 0;
bthTexDesc.Usage = D3D11_USAGE_DEFAULT;
bthTexDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE | D3D11_BIND_UNORDERED_ACCESS;
bthTexDesc.MiscFlags = 0;
bthTexDesc.CPUAccessFlags = 0;

D3D11_SUBRESOURCE_DATA data;
ZeroMemory(&data, sizeof(data));
data.pSysMem = (void*)BTH_IMAGE_DATA;
data.SysMemPitch = BTH_IMAGE_WIDTH * 4 * sizeof(char);

ID3D11Texture2D* bthTex = nullptr;
dev->CreateTexture2D (&bthTexDesc, &data, &bthTex);

D3D11_SHADER_RESOURCE_VIEW_DESC resViewDesc;
ZeroMemory (&resViewDesc, sizeof(resViewDesc));
resViewDesc.Format = bthTexDesc.Format;
resViewDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
resViewDesc.Texture2D.MipLevels = bthTexDesc.MipLevels;
resViewDesc.Texture2D.MostDetailedMip = 0;
dev->CreateShaderResourceView(bthTex, &resViewDesc, &bth);


D3D11_UNORDERED_ACCESS_VIEW_DESC uavDesc;
ZeroMemory (&uavDesc, sizeof(uavDesc));
uavDesc.ViewDimension = D3D11_UAV_DIMENSION_TEXTURE2D;
uavDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
uavDesc.Texture2D.MipSlice = 0;
dev->CreateUnorderedAccessView (bthTex, &uavDesc, &bthOut);


// RENDER
devcon->PSSetShaderResources (0, 1, &unbindSRV);

devcon->CSSetShader(computeShader, nullptr, 0);
devcon->CSSetShaderResources(0, 1, &bth);
devcon->CSSetUnorderedAccessViews (1, 1, &bthOut, nullptr);
devcon->Dispatch(25, 25, 1);
devcon->CSSetUnorderedAccessViews (0, 1, &unbindUAV, nullptr);

devcon->IASetIndexBuffer(indexBuffer, DXGI_FORMAT_R32_UINT, 0);
devcon->VSSetShader(vertexShader, nullptr, 0);
devcon->HSSetShader(nullptr, nullptr, 0);
devcon->DSSetShader(nullptr, nullptr, 0);
devcon->GSSetShader(geometryShader, nullptr, 0);
devcon->PSSetShader(pixelShader, nullptr, 0);
devcon->PSSetShaderResources(0, 1, &bth);


// COMPUTE SHADER
RWTexture2D<float4> output : register(u0);
Texture2D<float4> inputTex : register(t0);

[numthreads(32, 32, 1)]
void CS_main(uint3 threadID: SV_DispatchThreadID)
{
     int3 texLoc = int3(0, 0, 0);
     texLoc.x = threadID.x;
     texLoc.y = threadID.y;

     float value = inputTex.Load(texLoc);
     output[threadID.xy] = 2.0f * value;
}
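Two mismatches in the listing above stand out (hedged observations, not a verified fix). First, the UAV is bound to slot 1 with CSSetUnorderedAccessViews(1, 1, &bthOut, nullptr) while the shader declares register(u0), and the unbind afterwards targets slot 0. Second, both the SRV and the UAV are created on the same bthTex resource, and D3D11 will not allow one resource to be bound for reading (t0) and writing (u0) at the same time; the runtime silently nulls one of the views, and a nulled SRV reads as zero, i.e. black. Binding the UAV to slot 0 and writing into a second texture should behave. On the HLSL side, note also that assigning Load’s float4 result to a plain float keeps only the red channel:

RWTexture2D<float4> output : register(u0); // a *different* texture than inputTex
Texture2D<float4> inputTex : register(t0);

[numthreads(32, 32, 1)]
void CS_main(uint3 threadID : SV_DispatchThreadID)
{
    // Keep all four channels; 'float value = ...' truncates to .x (red).
    float4 value = inputTex.Load(int3(threadID.xy, 0));
    output[threadID.xy] = 2.0f * value;
}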

Encoding Float to RG/RGBA and Blending


Encoding a float value inside an RG or RGBA texture is very interesting and useful, but it also becomes quite useless once you use blending, as the encoded values can be altered by overlapping faces.
Is there a way to avoid such issues?

Here is an example of a bad result when encoding depth in the red and green channels and simply decoding it again in a post-process effect:

[screenshot of the depth-decoding artifacts]

How to diagnose the problem when the input assembler and the vertex shader look correct, but the Output Merger is wrong? [closed]


I’m porting some OpenGL code to DirectX 11. I ended up with nothing being drawn on the screen, so I reverted to a simple program, which I am writing about here.

I’m now trying to use the Graphics Tools in Visual Studio to diagnose a very simple “Hello Triangle” program. In my program, I am drawing simple geometry and using a basic shader. The shader just outputs a constant color.

float4 ps_main(VS_OUTPUT Input) : SV_TARGET
{
    return float4(0.2f, 0.2f, 0.2f, 1.0f);
}

However, the Output Merger shows nothing.

[screenshot: the Output Merger preview appears empty; look closely]

When I click on the output merger, I see a green alpha checkerboard.

[screenshot: green alpha checkerboard]

What I would like to know ultimately, is what I’m doing wrong. (Why is the output merger basically blank?) Additionally, I’d like to learn some skill in reading outputs like this:

[screenshot of the pipeline stage outputs]

This is the contents of basic.hlsl

cbuffer cbTransform : register( b0 )
{
    matrix matWorldViewProj;
};

struct VS_INPUT
{
    float3 Position     : POSITION0;
    float2 TexCoord     : TEXCOORD0;
    float3 Normal       : NORMAL;
    float4 Color        : TEXCOORD1;
};

struct VS_OUTPUT
{
    float4 Position     : SV_POSITION;
    float2 TexCoord     : TEXCOORD0;    
    float3 Normal       : NORMAL;
    float4 Color        : COLOR0;
};

VS_OUTPUT vs_main( VS_INPUT Input )
{
    VS_OUTPUT Output;
    Output.Position = mul(float4(Input.Position,1), matWorldViewProj);
    Output.TexCoord = Input.TexCoord;
    Output.Normal = mul(Input.Normal, (float3x3)matWorldViewProj);
    Output.Color = Input.Color;
    return( Output );
}

float4 ps_main(VS_OUTPUT Input) : SV_TARGET
{
    return float4(0.2f, 0.2f, 0.2f, 1.0f);
}

Here’s what matWorldViewProj looks like:

[screenshot of the matrix values]
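One classic culprit when porting from OpenGL (a guess to check, not a diagnosis): HLSL constant buffers default to column_major matrix packing, so a row-major matrix uploaded unchanged from the CPU is effectively transposed and can throw every vertex off screen, leaving the output merger empty even though the input assembler and vertex shader stages look plausible. Either transpose before uploading or state the layout explicitly:

cbuffer cbTransform : register( b0 )
{
    // Match whatever layout the CPU side writes; the HLSL default
    // is column_major, which disagrees with row-major CPU math.
    row_major matrix matWorldViewProj;
};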

Multi-textured terrain in XNA/MonoGame


I’m developing a terrain engine for my project and I have some issues with the multi-texturing part.

I have already tried some techniques, but they don’t fit my terrain engine’s design.

My TerrainEngine is made using the concept of Regions, Sectors and tiles.

  • A region contains 16×16 sectors
  • A sector contains 8×8 tiles
  • A tile contains all the information, like 4 height points and 4 textures (one on each corner)

So I can create a terrain of 5×5 regions if I want to. But that’s not the point.

My question is: how can I do efficient multi-texturing using the RST technique in XNA/MonoGame?

I’ve heard about a technique named “texture atlas”, but I don’t know if that’s a good option.
What do you suggest?

Oh, and I’m using 4 texture samplers in my HLSL.
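For reference, the usual 4-sampler splatting pixel shader blends the layers with per-vertex weights, which fits the one-texture-per-corner layout if each tile’s corner weights are put in the vertex color. A minimal sketch (the sampler and semantic names are assumptions):

sampler layer0; sampler layer1; sampler layer2; sampler layer3;

float4 PS(float2 uv : TEXCOORD0, float4 weights : COLOR0) : COLOR0
{
    // weights.xyzw are the four corner-texture weights, summing to 1.
    return tex2D(layer0, uv) * weights.x
         + tex2D(layer1, uv) * weights.y
         + tex2D(layer2, uv) * weights.z
         + tex2D(layer3, uv) * weights.w;
}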

Multisampled Nearest Filtering in PS 2.0: is it possible?


My game involves blocky, pixelated 3D textures. When using nearest-neighbor filtering with a texture sampler, I get the desired result of nicely pixelated textures, with the caveat that the hard lines between pixels are aliased. Here’s a screenshot of what I mean:

[screenshot showing aliased texel edges]

See the jaggies?

I’d like to know if I can use some kind of screen-space multisampling to fix these jagged edges. I’m using HLSL and pixel shader 2.0. Is it possible to accomplish this in the shader? (Note that MSAA does not help here because the aliasing comes from the texture sampler, not from rasterized geometry.)
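One shader-side option worth trying is the “sharp bilinear” trick: sample with LINEAR filtering, but remap the UVs so the blend happens only within about one screen pixel of each texel boundary, keeping the flat pixelated look everywhere else. A sketch, assuming a float2 texSize global holding the texture dimensions; note it needs the gradient intrinsic fwidth, so ps_2_x or ps_3_0 rather than plain ps_2_0:

float2 pixel = uv * texSize;
float2 seam = floor(pixel + 0.5);  // nearest texel boundary
float2 footprint = fwidth(pixel);  // screen-space size of one texel step
pixel = seam + clamp((pixel - seam) / footprint, -0.5, 0.5);
float4 c = tex2D(samp, pixel / texSize); // sampler set to LINEAR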

How can I test if one point can “see” another point? (XNA)


I want to test whether my enemy can see the player; however, I want this to be pixel-perfect.

I already have all of the solid objects drawing into a separate render target. It should be noted that every solid object is constantly changing and warping all over the place. I cannot check individual walls, because a distortion effect is applied to all of them and the walls blend together.

Essentially, it would lerp between the enemy’s position and the player’s position and check whether any pixels along the way have an alpha value greater than 0.
The rotation of the enemy and the player does not matter.

All of my attempts at doing this on the CPU have worked, but they slowed the game down dramatically. All of my attempts at doing this on the GPU just didn’t work at all.

What is the most efficient way of doing this? Is there any way to do this on the GPU?

Edit: There is no geometry at all. The walls are completely amorphous.

I did not realize that global variables were constant in HLSL until after I wrote this:

sampler s0;
texture tex;
sampler tex_sampler = sampler_state{Texture = tex;};

float2 screenSize;
float2 from;
float2 to;
float2 impact;
int impacted;

float4 PixelShaderFunction(float2 coords: TEXCOORD0) : COLOR0  
{  
    from /= screenSize;
    to /= screenSize;

    impacted = 0;

    for (float i = 0; i < 1; i += 1.0f / length(to - from))
    {
        float2 test = lerp(from, to, i);

        if ( (tex2D(tex_sampler, test)).a )
        {
            impacted = 1;
            impact = test * screenSize;
            break;
        }
    }

    return tex2D(s0, coords);
}  

technique Technique1  
{  
    pass Pass1  
    {  
        PixelShader = compile ps_2_0 PixelShaderFunction();  
    }  
} 

The solid pixels would be passed in through the “tex” variable, and I would read the “impacted” and “impact” variables back after one pass over a 1×1 texture. HLSL just doesn’t want to let me do this: apparently global variables are implicitly constant. Is there another way of doing this?

On the variable syntax page it says: “Global variables are considered const by default (suppress this behavior by supplying the /Gec flag to the compiler).”
How do I add the /Gec flag to the compiler?
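The usual way around the constant-globals restriction is to return results through the render target instead: draw one pixel into a 1×1 render target, keep from/to/screenSize as read-only inputs, and encode the hit in the output color. A sketch, reusing the globals above (needs ps_3_0 for the dynamic loop and early return):

float4 PS_LineOfSight(float2 coords : TEXCOORD0) : COLOR0
{
    float2 a = from / screenSize;
    float2 b = to / screenSize;

    for (float i = 0; i < 1; i += 1.0f / 256.0f) // fixed sample count
    {
        float2 test = lerp(a, b, i);
        if (tex2D(tex_sampler, test).a > 0)
            return float4(1, test.x, test.y, 1); // r = hit flag, gb = hit UV
    }
    return float4(0, 0, 0, 1); // no hit
}

Reading the 1×1 target back still forces a CPU/GPU sync, but it copies four bytes instead of the whole screen; the full-texture GetData is most likely what makes the CanSee version below so slow.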

I also wrote this, but it slowed the game down:

private bool CanSee(Vector2 from, Vector2 to)
{
    Color[] t = new Color[target_solid_final.Width * target_solid_final.Height];
    target_solid_final.GetData(t);

    for (float i = 0; i < 1; i += 8.0f / (to - from).Length())
    {
        Vector2 test = Vector2.Lerp(from, to, i);
        Point test_p = new Point((int)test.X, (int)test.Y);

        if (viewportRect.Contains(test_p) &&
            t[test_p.X + test_p.Y * target_solid_final.Width].A > 0)
            return false;
    }

    return true;
}