Channel: Question and Answer » hlsl

Binding an Instance Matrix with an InputLayout


If I want to bind an instance matrix to a vertex shader, how do I go about it?

For example, here is a super-simple VS input:

struct VS_IN {
    float4 Position : POSITION;
    row_major float4x4 WorldMat : INSTANCE_TRANSFORM;
};

How would the corresponding bindings look when creating the ID3D11InputLayout?

I figured it would be something akin to:

D3D11_INPUT_ELEMENT_DESC bindings[5];
bindings[0] = { 
    "POSITION", // SemanticName
    0U, // SemanticIndex
    DXGI_FORMAT::DXGI_FORMAT_R32G32B32A32_FLOAT, // Format
    0U, // InputSlot
    0U, // AlignedByteOffset
    D3D11_INPUT_CLASSIFICATION::D3D11_INPUT_PER_VERTEX_DATA, // InputSlotClass
    0U // InstanceDataStepRate
};
bindings[1] = { 
    "INSTANCE_TRANSFORM", // SemanticName
    0U, // SemanticIndex
    DXGI_FORMAT::DXGI_FORMAT_R32G32B32A32_FLOAT, // Format
    1U, // InputSlot
    0U, // AlignedByteOffset
    D3D11_INPUT_CLASSIFICATION::D3D11_INPUT_PER_INSTANCE_DATA, // InputSlotClass
    1U // InstanceDataStepRate
};
bindings[2] = { 
    "INSTANCE_TRANSFORM", // SemanticName
    0U, // SemanticIndex
    DXGI_FORMAT::DXGI_FORMAT_R32G32B32A32_FLOAT, // Format
    2U, // InputSlot
    0U, // AlignedByteOffset
    D3D11_INPUT_CLASSIFICATION::D3D11_INPUT_PER_INSTANCE_DATA, // InputSlotClass
    1U // InstanceDataStepRate
};
bindings[3] = { 
    "INSTANCE_TRANSFORM", // SemanticName
    0U, // SemanticIndex
    DXGI_FORMAT::DXGI_FORMAT_R32G32B32A32_FLOAT, // Format
    3U, // InputSlot
    0U, // AlignedByteOffset
    D3D11_INPUT_CLASSIFICATION::D3D11_INPUT_PER_INSTANCE_DATA, // InputSlotClass
    1U // InstanceDataStepRate
};
bindings[4] = { 
    "INSTANCE_TRANSFORM", // SemanticName
    0U, // SemanticIndex
    DXGI_FORMAT::DXGI_FORMAT_R32G32B32A32_FLOAT, // Format
    4U, // InputSlot
    0U, // AlignedByteOffset
    D3D11_INPUT_CLASSIFICATION::D3D11_INPUT_PER_INSTANCE_DATA, // InputSlotClass
    1U // InstanceDataStepRate
};

In the example above, I’m increasing the InputSlot value with each ‘row’ of the instance matrix, because I’d expect each row to occupy the next 128-bit input register.

However, in other examples I’ve seen on the internet, people seem to increase the SemanticIndex with each row instead. I thought that might be because they declare the input as four float4 fields instead of a matrix, but then I looked at this question, and that doesn’t seem to be the case.

Anyway, to cut to the point, if I want to use VS_IN in the format I’ve provided, what is the correct way to lay out the input elements?
