
fragment shader ...

Posted: Wed Feb 11, 2009 11:41 am
by Simon B
The fragment shader starts by pulling info from the plist and the vertex shader ... I have to figure out what is needed:

Code: Select all

// Information from Oolite.
uniform sampler2D      uColorMap; // Diffuse and Specular Intensity map
uniform sampler2D      uFXMap; // Effects & Light Illumination Map
uniform sampler2D      uNormalMap; // Normal Map

const float SpecExponent = 4.0;
uniform vec3   SpecularRGB
uniform float   time;
uniform float   eng_pow;
uniform float   hull_heat;
The vertex shader seems to build a tangent space ... comparing the griff-boa fs to the griff-boa vs suggests I need to get these:

Code: Select all

varying vec2         vTexCoord;
varying vec3         vEyeVector;   // These are all in tangent space
varying vec3         vLight0Vector;
varying vec3         vLight1Vector;
I'm going to need functions to cover the hull heat glow and the engine heat glow (which, between the last post and this one, I decided will be orangey - not cyan - to fit with the diffuse map color). Hull lights etc. will be constant.

Code: Select all

// redGlow effect
vec4 redGlow(float level)
{
   vec4 result;
   result.rgb = vec3(1.9, 0.5, 0.2) * level * 2.0;
   result.a = 1.0;
   return result;
}

// EngineGlow effect
vec4 EngineGlow(float level)
{
   vec4 result;
   
   result.rgb = vec3(1.9, 0.5, 0.20) * level;   
   result.a = 1.0;
   
   return result;
}
Now - this bit:

Code: Select all

void Light(in vec3 lightVector, in vec3 normal, in vec4 lightColor, in vec3 eyeVector, 
           in float specExponent, inout vec4 totalDiffuse, inout vec4 totalSpecular)
{
   lightVector = normalize(lightVector);
   vec3 reflection = normalize(-reflect(lightVector, normal));
   
   totalDiffuse += gl_FrontMaterial.diffuse * lightColor * max(dot(normal, lightVector), 0.0);
   totalSpecular += lightColor * pow(max(dot(reflection, eyeVector), 0.0), specExponent);
}
... is a bit dense to me - I don't get it. It looks kinda like the normal-vector calculations in the shady-cobra vertex shader. Anyway, it's used in a #define ...

Code: Select all

#define LIGHT(idx, vector) Light(vector, normal, gl_LightSource[idx].diffuse, eyeVector, SpecExponent, diffuse, specular)
... so I figure this is setting up how the normal map effects get added. I'm a little worried that there will be a clash between the two definitions of SpecExponent but some systems let you pass values like that.
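As far as I can tell the #define just expands in place, so (if I've got the expansion right) the call I use later would come out as:

Code: Select all

// Hypothetical expansion of LIGHT(1, normalize(vLight1Vector)):
Light(normalize(vLight1Vector), normal, gl_LightSource[1].diffuse, eyeVector, SpecExponent, diffuse, specular);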

Now I should be ready for the main function - it starts out by initializing what will end up holding the final color/lighting stuff that gets applied, and loading the textures. There's an extra bit for normal maps which I'm not used to.

Code: Select all

void main()
{
   vec4 diffuse = vec4(0.0), specular = vec4(0);
   vec3 eyeVector = normalize(vEyeVector);
   vec2 texCoord = vTexCoord; // what's this?

   // Load texture data
   vec4 colorMap = texture2D(uColorMap, texCoord);
   vec4 fxMap = texture2D(uFXMap, texCoord);
   vec3 normal = normalize( texture2D(uNormalMap, texCoord).xyz - 0.5);
   normal = normalize(normal);

   float specIntensity = 15.0 * colorMap.a * colorMap.a;
I think that "- 0.5" on the end of the first "normal =" line is some sort of scaling?

Note that the griff boa sets specular intensity to a constant - but I want to use the alpha channel in the texture for this ... which will be fun when I get around to playing with it.

From here I need to add in the effects, and the specular:

Code: Select all

#ifdef OO_LIGHT_0_FIX
   LIGHT(0, normalize(vLight0Vector));
#endif
   LIGHT(1, normalize(vLight1Vector)); // change the 0 to 1 when exporting back to oolite
   
   diffuse += gl_FrontMaterial.ambient * gl_LightSource[0].ambient;
   diffuse += fxMap.a; // use fxmap alpha channel as an illumination map 
   
   vec4 color = diffuse * colorMap;
   color += redGlow(fxMap.b * Pulse(min(eng_pow, 1.0), 1.0)); // adds red 'engine heat' glow
   color += EngineGlow(fxMap.r * eng_pow); // adds orange/red exhaust-vent glow
   
// calculate the specular, color it using a weighted sum of the diffuse map and the imported color
   color += 3.0 * (0.7 * vec4(SpecularRGB, 1.0) + 0.3 * colorMap) * specular * specIntensity; // SpecularRGB wrapped in a vec4 so the types match
   color.a = 1.0;
   
   gl_FragColor = color;
}
In-ter-rest-ting ... the alpha channel in the fx map is used as an illumination map. I've not actually played with this purposely.


Now - apart from syntax errors - which I never see before hitting "send" - I'm sure I've missed something out...

Posted: Wed Feb 11, 2009 11:51 am
by Simon B
MKG wrote:
Simon B wrote:
Me too - just tried... I can select File > Import > Wavefront - browse to a file - select file - now what? Whatever I do seems to cancel the import.
File > Import > Wavefront > (browse to file) > (select file) ...

... at which point the filename appears in the frame below the Directory Selection frame. This is where Blender gets a lot of criticism for having non-standard interface aspects. At the right extreme of the Directory Selection frame is a button (Import Wavefront, I think) which you almost automatically think is the thing you pressed to get to where you are. Wrong! It's the one you press NOW to perform the import.

And all should be well.

Mike
Yep - only I get a dialog of import options ... Right up until today, I'd press "Import" and get an error that the file is not recognized. Earlier today - right up till the last post (mournful bugle starts - catches my look - stops ... it's 00:45hrs here, what do you expect?) - it returned me to the main screen right away, but no model.

Just when I'm finally seeking to explain it to someone else, it works - even imports the texture. I'm playing with the interface to see how to change the viewpoints and so on. Need to know how to apply a shader ... ideas?

Maybe if I assign the normal map (which is what immediately interests me) to the texture in Wings3D, then export it to obj, obj format will store the map info, and blender will pick it up with the diffuse map???

Posted: Wed Feb 11, 2009 11:57 am
by Griff
I think the shaders to reference as a starting point would be the ones Ahruman has posted up - they work properly and don't have any of the clueless tinkering I do in them. For instance, in the griff boa vertex shader you can see the lines
if (gl_Vertex.x > 0.0)
TBN = mat3(-t, b, n);

that's where I was trying to flip the tangent on the polygons on one half of the model so I could mirror the UVs and still have the lighting work correctly - it's not a proper fix and it might not be needed for your ships (it depends whether you've mirrored the left and right halves of your model in the UV map)
If you want to use any shaders I've mangled (and please feel free to, by all means), I think the ones from the normal-mapped Cobra main hull will be better - they have more effects supported in them, plus another_commander spent a bit of time looking through them to fix a few bugs. Let me know what you want and I'll customise them for you.
Off the top of my head, they work like this (there's a rough sketch after the list):
Colour map - RGB channels - the colour texture for shader & non-shader mode
Colour map - A channel - the specular intensity map for shaders mode
Effects map - R channel - the red-hot metal around the 'rocket exhaust'
Effects map - B channel - the cyan coloured exhaust glow
Effects map - G channel - mask 1 for the re-painting effect
Effects map - A channel - mask 2 for the re-painting effect
Normal map - RGB channels - used for normal mapping
Normal map - A channel - illumination map for the hull lights (including the faked spill light from the cockpit window)
Decal map - RGBA channels - 4 different decals, one per channel, arranged horizontally in the image
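In shader terms that boils down to something roughly like this - a paraphrase from memory using the uniform and varying names from Simon's listing above, not a paste from the real file:

Code: Select all

// Rough paraphrase (from memory) of how the channels get used;
// the uniform/varying names are borrowed from Simon's listing above.
uniform sampler2D uColorMap;
uniform sampler2D uFXMap;
uniform sampler2D uNormalMap;
varying vec2 vTexCoord;

void main()
{
   vec4 colorMap  = texture2D(uColorMap, vTexCoord);
   vec4 fxMap     = texture2D(uFXMap, vTexCoord);
   vec4 normalMap = texture2D(uNormalMap, vTexCoord);

   vec3 baseColour     = colorMap.rgb;                    // colour texture
   float specIntensity = colorMap.a;                      // specular intensity map
   float hotMetal      = fxMap.r;                         // red-hot metal around the exhaust
   float exhaustGlow   = fxMap.b;                         // cyan exhaust glow
   vec2 paintMasks     = vec2(fxMap.g, fxMap.a);          // the two re-painting masks
   vec3 normal         = normalize(normalMap.xyz - 0.5);  // tangent-space normal
   float hullLights    = normalMap.a;                     // hull-light illumination map

   gl_FragColor = vec4(baseColour, 1.0);                  // the real shader combines all of the above
}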

Posted: Wed Feb 11, 2009 12:29 pm
by Simon B
Griff wrote:
I think the shaders to reference as a starting point would be the ones Ahruman has posted up - they work properly and don't have any of the clueless tinkering I do in them. For instance, in the griff boa vertex shader you can see the lines
if (gl_Vertex.x > 0.0)
TBN = mat3(-t, b, n);
I didn't use that one <checks> nope - just left it out when I figured out what it did.

I've found that sorting through your fiddles helps toward final illumination - or is that a migraine? (They are soo similar...)

I got the brick-wall demo too (jic) but my main concern was to discover where adding a normal map changes the shader style I'm familiar with... so I've been comparing with those I did (with help <bows>) for the arachnid.

Comparing Ahruman's stuff with yours worked then too.

Hmmm... I see already that I missed out the Pulse function. That should survive unchanged? <sigh>
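Something like this is what I'm assuming Pulse boils down to - the level modulated by a sine of the time uniform. The shape and names are my guess; Griff's real version may well differ:

Code: Select all

// Guesswork sketch of a Pulse-style function: modulate a level with a sine
// wave driven by the "time" uniform. Griff's actual Pulse may differ.
float Pulse(float value, float timeScale)
{
   return value * (0.5 + 0.5 * sin(time * timeScale));
}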

At the end it boils down to how much work I'm prepared to do. You'll see from the gecko how I intend to do most of the normal maps (in terms of style). I don't think I really want to be producing four textures for each ship... thou-oughhh I could... no! Welllll... no!

..... Anyone of a less technical persuasion who wades through all that code I just generated should gain an insight into how we techies manage these things. The main difference between a techie and a non-techie is that the techies are not so bothered when they don't understand something.

For the record - a few months ago I'd never heard of shaders.

Posted: Wed Feb 11, 2009 12:36 pm
by Selezen
Hey, I'll quite happily hand over the Dream Team Member's ID to Simon and Griff. They have outdone anything I could even contemplate at this time.

I was trying to keep the DT versions as low-poly as possible and was refining the texturing concepts as I went, so they're not as polished as they could be. Simon's speed at churning these out is far greater than anything I could ever achieve, and he has got closer to a complete set (even BEYOND a complete set) than I have a chance of getting in the foreseeable future.

Go for it, I say.

Posted: Wed Feb 11, 2009 1:58 pm
by Simon B
Now happily importing obj files into Blender - still dunno how to add in the shaders. Found a Panels > Shaders > Textures option (I think) which let me add a normal map in the world dialog ... but nothing happened to the model.

Found myself musing: surely modern cylon kittens look just like regular kittens these days, apart from a kind of sadistic cuteness ... you can't tell from the eyes any more. Not even with a <sigh> cat scan ... right, that's it: time for bed!

Posted: Wed Feb 11, 2009 7:29 pm
by JensAyton
Simon B wrote:
@Ahruman - following prev. suggestions - this involves generating four maps: a high-res/detail RGB for the non-shaded texture, a low-res (poss. low-detail) RGBA for the shaded texture, a low-res/low-detail RGB effects map and a high-res/detail RGB(A?) normal map.

But why not use the non-shaded texture with an alpha channel as the shaded texture as well? There may be conflicts with the pseudo-bumps but that doesn't seem too bad in that gecko? Anyway...
One, because mixing real details and fake details tends to look bad. Two, because you might want to save some memory.
I notice that the griff-boa vertex shader is much more complicated - mostly around the tangent space needed for the normal maps ... so I guess it must be important.
Yes, it is. :-)
Griff’s vertex shader is similar to the new default vertex shader for 1.73. Doing that stuff in the vertex shader saves you doing it in the fragment shader, which is good because you tend to get thousands of fragments per vertex.

Code: Select all

const float SpecExponent = 4.0;
uniform vec3   SpecularRGB
uniform float   time;
uniform float   eng_pow;
uniform float   hull_heat;
You’re missing a semicolon there.
Now - this bit:

Code: Select all

void Light(in vec3 lightVector, in vec3 normal, in vec4 lightColor, in vec3 eyeVector, 
           in float specExponent, inout vec4 totalDiffuse, inout vec4 totalSpecular)
{
   lightVector = normalize(lightVector);
   vec3 reflection = normalize(-reflect(lightVector, normal));
   
   totalDiffuse += gl_FrontMaterial.diffuse * lightColor * max(dot(normal, lightVector), 0.0);
   totalSpecular += lightColor * pow(max(dot(reflection, eyeVector), 0.0), specExponent);
}
... is a bit dense to me - I don't get it.
Um. How much do you know about optics? :-)

This is the complete lighting calculation for one light. The totalDiffuse +=… line implements Lambert’s cosine law. It relies on the fact that the dot product of two normalized vectors is equal to the cosine of the angle between them. (See WP: Lambertian reflectance.) The max() filters out negative values, and multiplying by the material colour and light colour should be pretty obvious.

The totalSpecular +=… line implements Phong specular reflection (the Blinn-Phong variant would use a half-vector instead of the reflection vector).

This function doesn’t actually know anything about normal mapping per se. All lighting functions require a surface normal; in a normal-mapping shader, this is taken from the normal map. Without normal mapping it is generally linearly interpolated across the surface by the GPU. In this case, since we’re working in tangent space, we’ve transformed everything else (that is, the eye vector and light vectors) in the vertex shader so that the interpolated normal is always (0, 0, 1), so we can use the value from the normal map directly without additional transformations.
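For illustration only - not Griff's exact code, and the attribute name is an assumption - the vertex-shader side of that transformation usually looks something like this:

Code: Select all

// Illustrative tangent-space setup in a vertex shader; not Griff's exact
// code, and "tangent" as the attribute name is an assumption.
attribute vec3 tangent;

varying vec2 vTexCoord;
varying vec3 vEyeVector;
varying vec3 vLight1Vector;

void main()
{
   vec3 n = normalize(gl_NormalMatrix * gl_Normal);
   vec3 t = normalize(gl_NormalMatrix * tangent);
   vec3 b = cross(n, t);
   mat3 TBN = mat3(t, b, n);                  // columns map tangent space -> eye space

   vec3 eyeVector = -vec3(gl_ModelViewMatrix * gl_Vertex);
   vEyeVector = eyeVector * TBN;              // v * M multiplies by the transpose: eye space -> tangent space
   vLight1Vector = (gl_LightSource[1].position.xyz + eyeVector) * TBN;

   vTexCoord = gl_MultiTexCoord0.st;
   gl_Position = ftransform();
}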
I'm a little worried that there will be a clash between the two definitions of SpecExponent but some systems let you pass values like that.
What two definitions?

GLSL is a pass-by-value language, like all C-oids.
Now I should be ready for the main function - it starts out by initializing what will end up holding the final color/lighting stuff that gets applied, and loading the textures. There's an extra bit for normal maps which I'm not used to.

Code: Select all

void main()
{
   vec4 diffuse = vec4(0.0), specular = vec4(0);
   vec3 eyeVector = normalize(vEyeVector);
   vec2 texCoord = vTexCoord; // what's this?
That variable appears to be pointless in Griff’s case. It is necessary for parallax mapping, since that involves distorting the texture coordinate.

Code: Select all

   vec3 normal = normalize( texture2D(uNormalMap, texCoord).xyz - 0.5);
   normal = normalize(normal);
I think that "- 0.5" on the end of the first "normal =" line is some sort of scaling?
It’s technic’ly an offset :-). Texture components range from 0 to 1, but the normal vector’s components range from -1 to 1. Subtracting a number from a vector is sugar for subtracting from each component (i.e. it’s equivalent to …- vec3(0.5, 0.5, 0.5) in this case). Scaling by 2 would be redundant because of the normalization. Normalizing twice as in this code is also redundant.
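In other words, these two come out the same, which is why the explicit scale isn't needed:

Code: Select all

// Two equivalent ways of unpacking a tangent-space normal from the texture:
vec3 n1 = normalize(texture2D(uNormalMap, texCoord).xyz - 0.5);        // offset only, then normalize
vec3 n2 = normalize(texture2D(uNormalMap, texCoord).xyz * 2.0 - 1.0);  // the conventional "scale and bias" form
// n1 and n2 point the same way; the second normalize() in the original is a no-op.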
Found myself musing: surely modern cylon kittens look just like regular kittens these days, apart from a kind of sadistic cuteness ... you can't tell from the eyes any more. Not even with a <sigh> cat scan ... right, that's it: time for bed!
That’s not a cylon, that’s a KITTen.

Posted: Wed Feb 11, 2009 10:24 pm
by Simon B
Cool:
Ahruman wrote:
Simon B wrote:
@Ahruman - [snip] why not use the non-shaded texture with an alpha channel as the shaded texture as well?
One, because mixing real details and fake details tends to look bad. Two, because you might want to save some memory.
wot I said then - though adding an extra map does not save memory - unless you are thinking of RAM used while rendering?

Code: Select all

const float SpecExponent = 4.0;
uniform vec3   SpecularRGB
You’re missing a semicolon there.
Ta.
Now - this bit:[snip]... is a bit dense to me - I don't get it.
Um. How much do you know about optics? :-)
I used to lecture in it - though the advanced stuff left ray-optics completely... and it weren't my speciality.

The programmers' tendency to use transposed vectors and matrices makes the math look odd to me.

This is the complete lighting calculation for one light.
That part I got from the role it plays - I was more confused by the way it seems to pull its parts out of the air.
The totalDiffuse +=… line implements Lambert’s cosine law.
Though this is interesting anyway :)
It relies on the fact that the dot product of two normalized vectors is equal to the cosine of the angle between them.
a.b = |a||b|cos(A) - but |a| = |b| = 1 ... OK...

So the lightVector is the incident ray, the normal is just that - the (outward) normal to the surface(?) "reflect" is a built-in function to do reflections using this sort of info (why is it negative?)

OK - it's falling into place now ta.
In this case, since we’re working in tangent space, we’ve transformed everything else (that is, the eye vector and light vectors) in the vertex shader so that the interpolated normal is always (0, 0, 1), so we can use the value from the normal map directly without additional transformations.
That's neat - and explains my confusion - the hard work is done elsewhere.
I'm a little worried that there will be a clash between the two definitions of SpecExponent but some systems let you pass values like that.
What two definitions?
Declarations? The name is used right at the start, and as the name of an input to a function. I had just noticed that Griff was careful to give them different names... perhaps for clarity.
Now I should be ready for the main function - it starts out by initializing what will end up holding the final color/lighting stuff that gets applied, and loading the textures. There's an extra bit for normal maps which I'm not used to.

Code: Select all

void main()
{
   vec4 diffuse = vec4(0.0), specular = vec4(0);
   vec3 eyeVector = normalize(vEyeVector);
   vec2 texCoord = vTexCoord; // what's this?
That variable appears to be pointless in Griff’s case. It is necessary for parallax mapping, since that involves distorting the texture coordinate.
OK... though: could the migrating squares effect be a parallax issue?

Code: Select all

   vec3 normal = normalize( texture2D(uNormalMap, texCoord).xyz - 0.5);
   normal = normalize(normal);
I think that "- 0.5" on the end of the first "normal =" line is some sort of scaling?
It’s technic’ly an offset :-). Texture components range from 0 to 1, but the normal vector’s components range from -1 to 1. Subtracting a number from a vector is sugar for subtracting from each component (i.e. it’s equivalent to …- vec3(0.5, 0.5, 0.5) in this case). Scaling by 2 would be redundant because of the normalization. Normalizing twice as in this code is also redundant.
I was wondering if the 0.5 should have been outside the brackets - which would need the second normalization. Your explanation says otherwise: there's no point normalizing the texture value before offsetting the components.
Found myself musing: surely modern cylon kittens look just like regular kittens these days, apart from a kind of sadistic cuteness ... you can't tell from the eyes any more. Not even with a <sigh> cat scan ... right, that's it: time for bed!
That’s not a cylon, that’s a KITTen.
Oh no...

Y'know - I spent a while trying to get the flashers to cylon, but realised that it would probably be more effective using a variable glow function in the shader.

Posted: Wed Feb 11, 2009 10:39 pm
by JensAyton
Simon B wrote:
wot I said then - though adding an extra map does not save memory - unless you are thinking of RAM used while rendering?
Yes of course I’m thinking of RAM used when rendering. Unloaded textures on disk are much less of a problem.
So the lightVector is the incident ray, the normal is just that - the (outward) normal to the surface(?)
Yes.
"reflect" is a built in function to do reflections using this sort of info
Yes. (It’s defined [p. 60] as reflect(I, N) = I - 2 * dot(N, I) * N.)
(why is it negative?)
I’m not sure. :-) Off the top of my head, it looks like the light vector is actually the wrong way around. This is probably inherited from wherever I gakked the code for the old Shady Cobra example from…
I'm a little worried that there will be a clash between the two definitions of SpecExponent but some systems let you pass values like that.
What two definitions?
Declarations? The name is used right at the start, and as the name of an input to a function. I had just noticed that Griff was careful to give them different names... perhaps for clarity.
Ahh. Function parameters and local variables shadow global variables, so it’s well-defined. Even so, it can lead to confusion, which is why I use sigils in my shaders.
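A contrived example of the scoping rule (the function is hypothetical, purely for illustration):

Code: Select all

const float SpecExponent = 4.0;         // file-scope constant

// Hypothetical function, just to illustrate shadowing:
float FakeSpecular(float SpecExponent)  // the parameter shadows the constant above
{
   // Inside this function, SpecExponent refers to the parameter, not the constant.
   return pow(0.5, SpecExponent);
}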
OK... though: could the migrating squares effect be a parallax issue?
Sorry, you’ve lost me again.

Posted: Thu Feb 12, 2009 12:59 am
by Simon B
Hmmm... the negative in the reflection is to reverse the direction of one of the vectors - stet. Suspect that definition expects the three to fan out, while the lightVector will point inwards.
Ahruman wrote:
OK... though: could the migrating squares effect be a parallax issue?
Sorry, you’ve lost me again.
Go back to where Griff posted the gecko animation... watch the squares.

Could I be looking at drifting parallax?

Meanwhile, back in Blenderville - I have not been able to get Blender to import a normal map with the mesh. I've heard that Blender can import Wings files - just not mine, for no good reason. Currently wading through the FAQs and stickies on the blender-artists forums...

If I could get a custom mesh into GIMP's 3D preview window - that would be ideal. It's been submitted as a feature request for the plugin.

Posted: Thu Feb 12, 2009 1:29 am
by Simon B
Logs:

Code: Select all

[shader.compile.fragment.failure]: ***** GLSL fragment shader compilation failed for neolite-std.fs:
>>>>> GLSL log:
(96) : error C0107: Too many arguments to macro LIGHT
(96) : error C1115: unable to find compatible overloaded function "reflect(vec3, error)"
(96) : error C1115: unable to find compatible overloaded function "normalize(error)"
(96) : error C1115: unable to find compatible overloaded function "dot(error, vec3)"
(96) : error C1115: unable to find compatible overloaded function "max(error, float)"
...huh?

Aren't those functions built-in? In which case, this suggests I need to update Oolite.

Posted: Thu Feb 12, 2009 6:12 am
by Simon B
The method used to produce the oldships.oxp set was pretty formulaic - only the cat and salamander do not use the formula.

"What formula?" I hear me ask ... why, this one of course.

If you want to use this set but worry that your fav ship will clash (or you have an OXP you want to modify to fit in... or you just want to be able to model): have a look.

Posted: Thu Feb 12, 2009 8:39 am
by Griff
Simon B wrote:
Go back to where Griff posted the gecko animation... watch the squares.

Could I be looking at drifting parallax?
I think this is just an effect of the model rotating in relation to the light direction, specifically when the light swaps from coming from the right to coming from the left. To make the animation test I just stuffed the gecko and its textures into the Cobra 3 RenderMonkey scene I have set up on my PC - there's no parallax support in that version of the shader.

I think to get the effect you're after, in the greyscale image you generate the final normal map from, paint the 'squares' as solid blocks of colour rather than as 2-3 pixel thick outlines. At the moment the 'bump' is more of an indented edge around the 'squares' in the texture than a sunken square panel.

Posted: Thu Feb 12, 2009 9:37 am
by JensAyton
Simon B wrote:
Logs:

Code: Select all

[shader.compile.fragment.failure]: ***** GLSL fragment shader compilation failed for neolite-std.fs:
>>>>> GLSL log:
(96) : error C0107: Too many arguments to macro LIGHT
(96) : error C1115: unable to find compatible overloaded function "reflect(vec3, error)"
(96) : error C1115: unable to find compatible overloaded function "normalize(error)"
(96) : error C1115: unable to find compatible overloaded function "dot(error, vec3)"
(96) : error C1115: unable to find compatible overloaded function "max(error, float)"
...huh?

Aren't those functions built-in? In which case, this suggests I need to update Oolite.
They are built in, but they’re “overloaded” in that they can use various different parameter combinations – there’s a reflect(vec3, vec3), a reflect(vec4, vec4), and a reflect(vec2, vec2). Here it suggests it’s looking for “reflect(vec3, error)”; error isn’t a real type, but presumably indicates that the compiler has a problem with the second parameter (and also a problem generating sensible diagnostics).

Assuming you haven’t changed the code above, it has a problem with “normal” for some reason, causing reflect(lightVector, normal) (in Light()) to be treated as “reflect(vec3, error)”. Because this is itself an error, it’s being treated as an expression of type error, so the normalize call is being treated as normalize(error). In the same way, the dot(normal, lightVector) is being treated as an error because there’s a problem with normal, and this cascades to the enclosing max() expression.

Without seeing the current full code, I can’t really say why it’s doing this, but given the “Too many arguments to macro LIGHT” bit, a possible reason would be that you’ve deleted the normal parameter from Light()’s argument list.

Posted: Thu Feb 12, 2009 1:25 pm
by DaddyHoggy
I don't understand hardly any of this and yet find it completely compelling at the same time...