I'm learning graphics! (BORING!)
Ok, so for Legacy, I want to be able to programmatically compose 2D art using the same blend modes that Photoshop uses. This allows us to just specify some hex values and automatically generate all the nicely colored images. I'm talking about stuff like overlay, screen, multiply, hard light, all that.
It turns out that Photoshop blend modes are fairly subtle. In particular, they perform math and logic on the underlying color. That's something a standard pixel shader can't do, because you just don't know what color the destination is until you get to the fixed-function step, after your shader is done. The fixed-function step can do some basic addition and multiplication - enough, as it happens, for multiply and even screen if you pick the right blend factors - but not for anything with a conditional in it, like overlay or hard light.
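For example, here's what overlay actually has to compute, sketched in Python (the function name is mine; channel values are normalized floats in [0, 1]):

```python
def overlay(base, blend):
    # Darken where the underlying layer (base) is dark,
    # lighten where it's light.
    if base < 0.5:
        return 2.0 * base * blend
    return 1.0 - 2.0 * (1.0 - base) * (1.0 - blend)
```

That branch on the destination value is exactly the kind of logic the blend stage can't express.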
So what you gotta do is use the underlying layer as an input to the shader that's doing the blend. This is kind of annoying because it means you need two texture targets, and you need to ping-pong between them, copying the image (or doing a whole framebuffer pass) at each step. But whatever, for Legacy we don't need this in the realtime pipe, so that's fine.
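And since none of this has to be realtime, you can even do the ping-pong in plain code. A minimal sketch, assuming same-sized images stored as flat lists of pixels (all the names here are mine):

```python
def compose(layers, blend_fn, backdrop):
    # Two buffers stand in for the two texture targets. Each pass reads
    # the result so far from one buffer, writes the blend of it with
    # the next layer into the other, and then the buffers swap roles.
    dst = list(backdrop)
    src = [None] * len(dst)
    for layer in layers:
        for i, (lower, upper) in enumerate(zip(dst, layer)):
            src[i] = blend_fn(lower, upper)
        dst, src = src, dst  # ping-pong
    return dst
```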
I was gonna build all this myself, but then I found this blog post and shader file that has it all, and it also answered my question about whether the blend logic should be per-component or on a single pixel "value" (per-component is the answer). Sweet. So now I have a little utility class to compose images using Photoshop blend styles.
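To make the per-component point concrete, here's the core idea of that utility class as a Python sketch (the names are mine; channels are normalized floats):

```python
def blend_rgb(mode_fn, base_rgb, blend_rgb):
    # Apply the blend math to r, g, and b independently,
    # not to a single combined "value" for the pixel.
    return tuple(mode_fn(b, s) for b, s in zip(base_rgb, blend_rgb))

screen = lambda base, blend: 1.0 - (1.0 - base) * (1.0 - blend)
multiply = lambda base, blend: base * blend

print(blend_rgb(screen, (0.2, 0.5, 0.8), (0.5, 0.5, 0.5)))
# -> (0.6, 0.75, 0.9)
```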
Once you have the rgb values correct, you also need to get the alpha right. This is something I had always kinda glossed over: I never quite understood how fixed-function blending handles alpha, or why alpha premultiplication is necessary, so I made some bad assumptions. But I figured it out!
If you care about the alpha value in the final image - i.e. you are rendering an intermediate texture that will later be rendered into the scene, and its alpha must be correct - then you cannot use the standard alpha blend function:
source.rgba*source.a + dest.rgba*(1-source.a)
because, although it will get the rgb values right, it will leave your image too transparent. To see this, plug in 1.0 for dest.a and 0.5 for source.a. The final alpha value is (0.5*0.5) + 1.0*(1-0.5) = 0.75.
But it should be 1.0, because the underlying layer is opaque. Basically, this blend function treats alpha like a color component instead of compositing it separately. That's the part I failed to realize; I had assumed there was special handling of the alpha component, for some reason.
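You can sanity-check that arithmetic directly; applied to the alpha channel, the standard blend is just source.a*source.a + dest.a*(1-source.a):

```python
# Standard alpha blend applied to the alpha channel itself.
src_a, dst_a = 0.5, 1.0
out_a = src_a * src_a + dst_a * (1.0 - src_a)
print(out_a)  # 0.75, even though an opaque backdrop should stay opaque
```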
What you need to do instead is multiply your rgb components by your alpha component inside the shader, and then use:
source.rgba*1 + dest.rgba*(1-source.a)
which expands to:
vec4(source.rgb*source.a, source.a) + dest.rgba*(1-source.a)
so, now plugging in 1.0 for dest.a and 0.5 for source.a, the rgb values come out the same as before, but the alpha value is now correctly calculated: 0.5 + 1.0*(1-0.5) = 1.0.
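Here's the whole premultiplied-alpha "over" operation as a tiny Python sketch (the function name is mine), with that same example plugged in:

```python
def premultiplied_over(src, dst):
    # src and dst are (r, g, b, a) tuples whose rgb has already been
    # multiplied by a. The same factor, 1 minus source alpha, now
    # scales every destination component, alpha included.
    k = 1.0 - src[3]
    return tuple(s + d * k for s, d in zip(src, dst))

# 50% white over opaque black: rgb matches what the standard blend
# gives, but alpha correctly comes out as 1.0.
print(premultiplied_over((0.5, 0.5, 0.5, 0.5), (0.0, 0.0, 0.0, 1.0)))
# -> (0.5, 0.5, 0.5, 1.0)
```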
So that's why alpha premultiplication is sometimes important: it matters whenever you need the alpha values in your final image to be accurate.