Sunday, March 20, 2016

Everything Old is New Again!

This GDC seemed to be more productive than most in recent memory, despite spending most of it in meetings, as per usual.  Among many other useful things was Dan Baker's talk on Object Space Shading (OSS) during the D3D Dev Day.  I think it's probably a safe bet that this talk will end up getting cited a lot over the next couple of years.  The basic gist of it was:
- Ensure all your objects can be charted.
- Build a list of visible objects.
- Attempt to atlas all your visible charts together into a texture page, allocating them space proportional to their projected screen area.
- Assuming you've got world-space normals, PBR Data, and position maps, light into this atlased texture page, outputting final view-dependent lit results.
- Generate a couple mips to handle varying lookup gradients.
- Now render the screen-space image, with each object simply doing a lookup into the atlas.
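To make the atlasing step concrete, here's a toy Python sketch of the proportional-area allocation.  The helper name and the round-down-to-power-of-two policy are my assumptions for illustration, not something from the talk:

```python
import math

def allocate_chart_resolutions(projected_areas, page_texels):
    # Give each visible chart a texel budget proportional to its
    # projected screen area, then round the square chart's edge down
    # to a power of two so charts pack cleanly into the page.
    total = float(sum(projected_areas))
    sizes = []
    for area in projected_areas:
        budget = page_texels * (area / total)              # texels for this chart
        edge = int(math.sqrt(budget))                      # square chart edge
        sizes.append(1 << max(0, edge.bit_length() - 1))   # round down to pow2
    return sizes
```

Note this is exactly where the "pathological case" worry below comes in - the allocation is driven purely by projected area, with nothing clamping how much of the page any one frame's visible set can demand.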

Pretty elegant.  Lots of advantages in this:
- essentially you keep lighting costs fixed (proportional to the super-chart's resolution)
- your really expensive material and lighting work doesn't contend with small triangle issues.
- all your filtering while lighting and shading is brutally cheap.
- because things are done in object space, even if you undersample things, should look "okay".

Now, as suggested, there are a few issues that I think are mostly ignorable given that this was designed around RTSs.  In the general case I think you'd really want to be a lot more careful about how things get atlased, as something like an FPS can fairly easily create pathological cases that "break" the scheme if you're just using projected area.  For example, imagine looking at a row of human characters forming a Shiva-pose, by which I mean, standing one behind the other with their arms at different positions.  Of course, this doesn't really break anything, but it does mean you're likely to oversubscribe the atlas and have quality bounce around depending on what's going on.  Even so, it's still pretty interesting to play around with.

So, I'm going to propose a different way to think about this which is actually in many ways so obvious I'm surprised more people didn't bring it up when we excitedly discussed OSS - lightmapping.  Ironically this is something people have been trying to get away from, but I guess I'll propose a way to restate the problem:

Consider that not everything necessarily needs the same quality lighting, or needs to be updated every frame.  So let's start by considering that we could maybe build three atlases - one for things that need updating every frame, and two low frequency atlases, which we update on alternating frames.  Now if we assume we're outputting final lighting, this might be a bit problematic, because specularity is obviously view dependent, and with a stale atlas, changing our view doesn't change the highlight.

Okay, so what if we don't output final shading but instead light directly into some kind of lighting basis?  For example, the HalfLife-2 basis (Radiosity Normal Maps or RNMs), or maybe Spherical Gaussians as demo'd by Ready At Dawn.  Now obviously your specular highlights will no longer be accurate as you're picking artificial local light directions.
As well, RNMs and much of their ilk tend to be defined in tangent-space, not world-space, so that's somewhat less convenient: instead of providing your lighting engine with just a normal, you actually need the full tangent basis, so you can rotate the basis into world-space before accumulating coefficients.  But it's been demonstrated by Far Cry 4 that you can encode a quaternion in 32 bits, so hardly impossible.  And FWIW, RNMs tend to be fairly compressible (6 coefficients, using an average color, are typically fine).
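For the curious, here's a minimal CPU-side sketch of what "rotate the basis into world-space before accumulating coefficients" means for one texel.  It uses the standard published HL2 basis directions; the function name and the (direction, color) light representation are mine:

```python
import math

# The three Half-Life 2 basis directions, defined in tangent space.
S6, S2, S3 = math.sqrt(6), math.sqrt(2), math.sqrt(3)
HL2_BASIS = [(-1 / S6,  1 / S2, 1 / S3),
             (-1 / S6, -1 / S2, 1 / S3),
             (math.sqrt(2 / 3), 0.0, 1 / S3)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def accumulate_rnm(tangent, bitangent, normal, lights):
    # Rotate each basis direction out of tangent space using the texel's
    # world-space tangent frame, then accumulate every light's clamped
    # cosine contribution per basis direction.
    # lights: list of (unit world-space direction-to-light, rgb) pairs.
    world_basis = []
    for bx, by, bz in HL2_BASIS:
        world_basis.append(tuple(bx * t + by * b + bz * n
                                 for t, b, n in zip(tangent, bitangent, normal)))
    coeffs = [[0.0, 0.0, 0.0] for _ in range(3)]   # 3 basis dirs x RGB
    for light_dir, rgb in lights:
        for i, wb in enumerate(world_basis):
            w = max(0.0, dot(wb, light_dir))
            for c in range(3):
                coeffs[i][c] += w * rgb[c]
    return coeffs
```

The point being: the lighting system never sees a BRDF here, just a tangent frame per texel and a pile of lights.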
Anyway, storing things in a basis in this way provides a number of interesting advantages that should be pretty familiar:
- Lighting is independent of material state/model/BRDF.  You don't need the albedo, metalness, roughness, etc.  This means that in cases where your materials are animating, you can still provide the appearance of high frequency material updates.  You can still have entirely different lighting models from object to object if you so choose.  Because of this, all you need to initially provide to the lighting system is the tangent basis and world-space position of each corresponding texel.  Your BRDF itself doesn't matter for building up the basis, so you can essentially do your BRDF evaluation when you read back the basis in a later phase (probably final shading).  This is analogous to, say, lighting into an SH basis where you simply project the lights into SH and sum them up - the SH texel density can be pretty sparse while still providing nice looking lighting results, so long as the lighting varies at a lower spatial frequency than the texel density.  Of course, specularity can be problematic depending on what you're trying to do, but more on that below.
- As said, lighting spatial frequency doesn't have to be anywhere near shading frequency and can be considerably lower as lerping most lighting bases tends to produce nice results without affecting the final lighting quality significantly (with the exception of shadowing, typically).
- Specular highlights, while inaccurate due to compressing lighting into a basis, can properly respond to SpecPower changes quite nicely.  There's also nothing stopping you from still using reflections as one normally would during the shading phase.  Lots of common lightmap+reflection tricks could be exploited here as well.  If you end up only needing diffuse for some reason, SH should be adequate (so long as you still cull backfacing lights), and would remove the tangent-space storage requirements - though you'd need to track vertex normals.
- There's no law that says *all* lighting needs to be done uniformly in the same way.  You could do this as a distinctly separate pass that feeds the pass that Baker described, or process them in the forward pass should the need arise.

And last but not least, if you're moving lighting into a basis like this, there's no rule that says you need to update everything every frame.  So for example, you could partition things into update-frequency-oriented groups and update based on your importance score.  This would also allow for your light caching to be a little more involved as you could now keep things around a lot longer (in a more general LRU cache).  For example, you could have very very low res lighting charts per object, all atlased together into one big page that's built on level load as a fallback if something suddenly comes into view, loose bounce determination, or distant LODs.  You could even PRT the whole thing into a super-atlas, assign each object a page, and treat the whole thing as a cache that you only update as needed!
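As a sketch of what "treat the whole thing as a cache that you only update as needed" could look like, here's a toy LRU chart cache.  Everything here (the class, the eviction policy, returning whether a relight is needed) is just illustrative plumbing, not a real implementation:

```python
from collections import OrderedDict

class ChartCache:
    # Toy LRU cache of per-object lighting charts in the super-atlas:
    # a visible object reuses its cached chart if present, otherwise it
    # gets (re)allocated a page, evicting the least-recently-used one.
    def __init__(self, capacity):
        self.capacity = capacity
        self.charts = OrderedDict()   # object id -> frame last relit

    def touch(self, obj_id, frame):
        # Object became visible this frame; relight only if missing.
        if obj_id in self.charts:
            self.charts.move_to_end(obj_id)    # mark most-recently-used
            return False                       # cached chart reused as-is
        if len(self.charts) >= self.capacity:
            self.charts.popitem(last=False)    # evict least-recently-used
        self.charts[obj_id] = frame
        return True                            # chart must be (re)lit
```

You'd obviously layer the importance score and update-frequency groups on top of this, so a `False` here might still trigger a refresh for high-priority objects.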

Anyway, just some ideas I've been playing around with that I figured I'd share with everyone.


Thursday, November 15, 2012

Dynamic Lighting in Mortal Kombat vs DC Universe

MKvsDC (as the title is colloquially referred to at NetherRealm) was our very first foray into the X360/PS3 generation of hardware. As a result, it featured a lot of experimentation and a lot of exploration to figure out what might and might not work. Unlike our more recent fighters, Mortal Kombat (2011) and Injustice: Gods Among Us (which is still getting completed and looking pretty slick so far), MKvsDC was actually a full-on 3D fighter in the vein of our last-gen (PS2/GameCube/Xbox1) efforts. (Note, once again, everything here has been explained publicly before, just in a bit less detail).

A lot of figuring out how to get things going in MKvsDC was trying to figure out what was doable and at the same time acceptable. The game had to run at 60Hz, but was both being built inside an essentially 30Hz targeted engine and had to look on par with 30Hz efforts. I remember quite early on in development Tim Sweeney and Mark Rein came out to visit Midway Chicago. I remember going for lunch with the Epic crew, myself spending the majority of it speaking with Tim (extremely nice and smart guy) about what I was going to try to do to Unreal Engine 3 to achieve our visual and performance goals. Epic was very up front and honest with us about how UE3 was not designed to be optimal for a game like ours. To paraphrase Sweeney, since this was years ago, "You're going to try to get UE3 to run at 60Hz? That's going to be difficult."

He wasn't wrong. At the time, UE3 was pretty much a multipass oriented engine (I believe it still theoretically is if you're not light baking, using Dominant Lights, or using Lighting Environments, though no one would ship a game that way). Back then there were still a lot of somewhat wishy-washy ideas in the engine like baking PRT maps for dynamic objects, which ended up largely co-opted to build differential normal maps. Lots of interesting experimental ideas in there at the time, but most of those were not terribly practical for us.

So Item #1 on my agenda was figuring out how to light things cheaper, but if possible, also better. Multipass lighting was out - the overhead of multipass was simply too high (reskinning the character per light? I don't think so!). I realize I'm taking this for granted, but I probably shouldn't - consider that one of the obvious calls early on was to bake lighting wherever possible. Clearly this biases towards more static scenes.

Anyhoo, we had really two different problems to solve. The first was how were we going to light the characters. The fighters are the showcase of the game, so they had to look good, of course. The second problem was how were we going to handle the desire for dynamic lighting being cast by the various effects the fighters could throw off. We handled it in the previous gen, so there was a team expectation that it would somehow be available "as clearly we can do even more, now".

So, the first idea was something I had briefly played around with on PS2 - using spherical harmonics to coalesce lights, and then light with the SHs directly. Somewhat trivially obvious now, it was a bit "whackadoo" at the time. The basics of the solution were already rudimentarily there with the inclusion of Lighting Environments (even if the original implementation wasn't entirely perfect at the time). Except instead of extracting a sky-light and a directional as Epic did, we would attempt to just directly sample the SH via the world-space normal.

This worked great, actually. Diffuse results were really nice, and relatively cheap for an arbitrary number of lights (provided we could safely assume these were at infinity). Specularity was another matter. Using the reflection vector to lookup into the solution was both too expensive and of dubious quality. It somewhat worked, but it didn't exactly look great.

So after playing around with some stuff, and wracking my brain a little, I came up with a hack that worked pretty decently given that we were specifically lighting characters with it. In essence, we would take the diffuse lighting result, use that as the local light color, and then multiply that against the power-scaled dot between the normal and eye vector. This was very simple, and not physically correct at all, but surprisingly it worked quite nicely and was extremely cheap to evaluate.
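Here's roughly what that pairing looks like, written out as Python for clarity.  The SH convention and constants are the standard real-SH ones, and the function names are mine - this is my reading of the hack, not the shipped shader:

```python
def eval_sh4(coeffs, n):
    # Evaluate a 4-coefficient (constant + linear band) SH set in
    # direction n.  coeffs[0] is the DC term; coeffs[1..3] pair with
    # (y, z, x) per the usual real-SH ordering.  Each entry is RGB.
    c0 = 0.2820948  # sqrt(1/(4*pi))
    c1 = 0.4886025  # sqrt(3/(4*pi))
    basis = (c0, c1 * n[1], c1 * n[2], c1 * n[0])
    return tuple(sum(b * coeffs[i][ch] for i, b in enumerate(basis))
                 for ch in range(3))

def cheap_character_lighting(coeffs, normal, eye, spec_power):
    # The hack as described: diffuse is the SH sampled along the world
    # normal, and "specular" just reuses that diffuse color scaled by a
    # power of N.E - no real highlight evaluation at all.
    diffuse = eval_sh4(coeffs, normal)
    ndote = max(0.0, sum(a * b for a, b in zip(normal, eye)))
    spec = tuple(d * ndote ** spec_power for d in diffuse)
    return diffuse, spec
```

Not physically correct in the slightest, but you can see why it's nearly free: one extra dot product and a pow on top of lighting you already computed.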

But, then Steve (the art director) and Ed came to me asking if there was anything we could do to make the characters pop a little more. Could we add a rim-lighting effect of some kind? So, again seeking something cheap, I tried a few things, but the easiest and cheapest thing seemed to be taking the same EdotN term and playing with it. The solution I went with was basically something like (going from memory here):

_Clip = 1 - dot(E, N);
RimResult = pow(_Clip, Falloff) * dot(N, D) * (_Clip > Threshold);

Where E is the eye/view vector, N the world-space normal, D a view-perpendicular vector representing the side we want the rim to show up on, and Falloff how sharp we want the highlight to seem. Using those terms provided a nice effect. Some additional screwing around discovered the last part - the thresholding.

This allowed for some truly great stuff. Basically this hard thresholds the falloff of the rim effect. So, this allows for, when falloff is high and threshold is as well, a sharp highlight with a sharp edge to it, which is what Steve wanted. Yet, if you played with it a bit other weirder things were possible, too. If you dropped the falloff so it appeared more gradual and yet hard thresholded early this gave a strange cut-off effect that looked reminiscent of metal/chrome mapping!

To further enhance all of this, for coloring I would take the 0th coefficients from the gathered SH set and multiply the rim value by that, which gave it an environmental coloring that changed as the character moved around. This proved so effective that initially people were asking how I could afford environment mapping in the game. All from a simple hard thresholding hack. In fact, all the metal effects on characters in MKvsDC are simulated by doing this.

Okay, so characters were covered. But environment lighting... yeesh. Once again, multipass wasn't really practical. And I knew I wanted a solution that would scale, and not drag the game into the performance gutter when people started spamming fireballs (or their ilk). What to do....

So it turned out that well, these fireball-type effect lights rarely hung around for very long. They were fast, and moved around a lot. And yet, I knew for character intros, fatalities and victory sequences simply adding them into the per-object SH set wouldn't prove worthwhile, because we'd want local lighting from them. Hmm...

So, I ended up implementing two different approaches - one for characters and a second for environments. For characters, I did the lighting in the vertex shader, hardcoding for 3 active point lights, and outputting the result as a single color interpolator into the pixel shader. This was then simply added in as an ambient factor into the diffuse lighting. As these lights tended to be crazy-over bright, the washing out of per-pixel detail didn't tend to matter anyway.

Environments were more challenging though. As tessellation of the world tended to be less consistent or predictable, the three lights were diffuse-only evaluated either per-pixel or per-vertex (artist selectable). When using per-vertex results, again a single color was passed through, but modulated against the local per-pixel normal's Y component (to fake a quasi ambient occlusion). This worked well enough most of the time, but not always. If you look carefully you can see the detail wash out of some things as they light up.
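A sketch of the environment path, in Python rather than shader code.  The linear falloff and the exact way the normal's Y component modulates the result are guesses for illustration:

```python
import math

def effect_light_ambient(position, normal_y, lights):
    # Diffuse-only evaluation of the three effect lights as attenuated
    # color, then modulated by the per-pixel normal's Y component as a
    # quasi ambient occlusion (upward-facing surfaces get full light).
    # lights: list of (world position, rgb, falloff radius).
    total = [0.0, 0.0, 0.0]
    for lpos, rgb, radius in lights:
        d = math.dist(position, lpos)
        atten = max(0.0, 1.0 - d / radius)   # simple linear falloff (assumed)
        for c in range(3):
            total[c] += rgb[c] * atten
    k = max(0.0, normal_y)
    return [t * k for t in total]
```

The detail wash-out mentioned above falls straight out of this: a single interpolated color times one scalar can't preserve per-pixel normal response.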

To keep track of the effect lights themselves, a special subclass of light was added that ignored light flags and was managed directly in a 3-deep FIFO, so that the designers could spawn lights at will without having to worry about managing them. When dumped out of the end of the FIFO, lights wouldn't actually be deleted of course, merely re-purposed and given new values. Extinguished lights were given the color black so they'd stop showing up. Objects and materials had to opt in to accepting dynamic lighting for it to show up, but anything that did was always evaluating all 3 lights whether you could see them or not.
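The FIFO management is simple enough to sketch in a few lines.  The dict-based light representation is purely illustrative:

```python
from collections import deque

class EffectLightPool:
    # Fixed pool of three effect lights managed as a FIFO: spawning a
    # fourth light silently repurposes the oldest, and "extinguished"
    # lights are just turned black rather than deleted.
    def __init__(self):
        self.fifo = deque([{"color": (0, 0, 0), "pos": (0, 0, 0)}
                           for _ in range(3)])

    def spawn(self, pos, color):
        light = self.fifo.popleft()       # oldest light gets recycled
        light["pos"], light["color"] = pos, color
        self.fifo.append(light)
        return light

    def extinguish(self, light):
        light["color"] = (0, 0, 0)        # black == invisible, stays in pool
```

The nice property for designers is that spawn can never fail and nothing ever needs explicit cleanup.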

Ironically about 3 months before shipping I accidentally turned off the effect lights showing up on characters and didn't realize until the very last minute when it was pointed out by QA (switched back on at the 11th hour!), which is why you'll find very few screenshots online created by our team with obvious effect lights showing up on the fighters. Oops!

Sunday, November 4, 2012

Handling Shadows in Mortal Kombat (2011)

So I haven't posted anything in... well, so long this blog is mostly defunct. But I figured it'd be worth posting something again. I can't really talk about Injustice: Gods Among Us's tech until we're done, and most likely shipped it. There's some cool graphics tech in there I'd love to go into... but it's too early. Ideally a GDC-like forum (though I won't be presenting at GDC... too turned off by my experience trying to submit something to GDC11) would be the best place. We'll see. Note, very little of what's described below applies to Injustice. FWIW I've talked about this publicly before (at Sony DevCon I think), but this explains things a bit more thoroughly.

I figured I'd talk a little about how shadows were dealt with in Mortal Kombat (aka MK9). The key thing to keep in mind is that in MK9 the performance envelope that could be dedicated to shadows was extremely limited. And yet, there were some conflicting goals of having complex looking shadow/light interaction. For example, our Goro's Lair background features a number of flickering candles casting shadows on walls, along with the typical grounding shadow one expects.

Goro's Lair concept art

So, what did we do? We cheat. A lot. But everyone who can cheat in game graphics *should* cheat, or imho, you're doing it wrong. Rendering for games is all about cheating (or to be better about wording - being clever). It's all about being plausible, not correct.

The key to keeping lighting under control in Mortal Kombat is rethinking light management. Unlike a lot of games, Mortal Kombat has the distinct advantage of a very (mostly) controlled camera and view. There are hard limits to where our shadow casters can go, what our lights do, and therefore where shadows might appear. Thus, unlike probably every other engine ever made, we do NOT compute caster/receiver interaction at runtime. Instead, we require artists to explicitly define these relationships. This means that in any given environment we always know where shadows might show up, we know which lights cast shadows, and we know which objects are doing the casting.

So, I'm not a total jerk about how this is exposed to the artists. They don't have to directly tie the shadow casting light to the surface. Instead they mark up surfaces for their ability to receive shadows and what kind of shadows the surface can receive. Surfaces can receive the ground shadow, a spotlight shadow, or both. You want two spotlights to shadow-overlap? Sorry, not supported. Redesign your lighting. This might sound bad, but it ends up encouraging shadows that aren't "locally concentrated" and are instead spread across the level. More on this shortly...

As everything receiving shadows in MK9 is at least partially prelit, shadows always down-modulate. All shadows cast by a single light source are presumed to collate into a single shadow-map, and all "maps" actually occupy a single shadow texture. In practice we can handle four shadows being constructed simultaneously - beyond that, the overhead of building the maps starts to get crazy, and we'd be risking quality to the point where it's likely not worth it. As we only have a few characters on-screen casting shadows, it's just not worth the trouble, either.

So, in a level like Goro's Lair in MK9 we can have a number of flickering candle shadows on the walls, giving it a really dramatic spooky look, especially when combined with the nice wet-wall shader work by the environment team. We can handle this cost by knowing explicitly that only a max of four shadows ever update, and that the local pixel shader for any given wall section only has to handle a given specific spot shadow projection. This allows the artists to properly balance the level's performance costs and ensure they stay within budget (which for shadow construction is absurdly low).

For texture projection (aka gobos in our engine's terminology) the same logic applies. You can get either subtractive gobos (say, the leaves in Living Forest) or additive ones (the stained glass in Temple) in a level, but not both. You can have multiple gobos in a level, but only one can hit a particular object at a time. Objects explicitly markup that they can receive the gobos, and then even pick how complex the light interaction is expected to be to keep shader cost under control (does the additive gobo contribute to the lighting equation as a directional light or merely as ambient lighting).

Concept art for The Temple

The gobos themselves can't be animated - they're not Light Functions in Epic-speak. Light Functions are largely unworkable, as they're too open-ended cost wise - the extra pass per-pixel is too expensive (MK demands each pixel is touched only once to the point of no Z-prepass). And hey, they're generally *extremely* rare to see in UE3 titles, even by Epic, for good reason. But, we can fake some complex looking effects by allowing artists to animate the gobo location, or switch between active gobos. Flying these gobo casters around is how we animate the dragon-chase sequence found in RoofTop-Day, which ended up being quite clever.

But that's the point really. The difference between building a game and building an engine is figuring out clever uses of tech, not trying to solve open-ended problems. The key is always to make it look like you're doing a whole lot more than you actually are. If you're going to spend the rendering time on an effect, make sure it's an obvious and dramatic one.

Thursday, March 1, 2012

A quick note to those who might see this - myself and a colleague (the ever brilliant Gavin Freyberg) will be giving a talk at the Epic booth during GDC'12. I've been told it's Wednesday around 11:45ish. The talk will be reviewing a variety of things we've been doing related to 60Hz - mostly covering work done in MK9 (aka Mortal Kombat 2011), but a small smattering of info on some of the more recent stuff our team's been up to on our next game. A good chunk of the MK9 info is stuff we really haven't talked about in any kind of detail before, so it might be of interest to some.

Nothing about our next game itself of course - that's PR's job for when we eventually announce it.

Sunday, March 21, 2010


Okay, time for a minor pet-peeve. If you're going to include a DVD with a book that includes sample implementations of concepts you MUST make it a requirement that people include some kind of sample for EVERY CHAPTER. I realize that's an additional burden on the author, and the editor might go through hell trying to wrangle all those chapters. Oh, and no doubt there are concerns about code quality, and lots of experimental code tends to be a little spaghetti-ish. But frankly I'll take anything if it can show me a sample implementation I can just quickly rip apart and get to the meat of.
Implementation details are often left out, or left unclear, or just left as exercises for the reader to discover. Given the purpose of a "Gems"-style book - which is to provide a reader with insight into *implementation* details and not simply to get some idea published - it's important that the reader can walk away with as clear an understanding as possible.

Thursday, March 4, 2010

God of War 3

Haven't posted in a thousand years, but figured I had something I could talk about that was worth sharing finally. Finally got a chance to try out the God of War 3 E3 demo and noticed a couple things while playing the demo for a few minutes. I could easily be wrong about this, but I think I figured out a little of what they're doing in the game to deal with real-time lighting. All of this is conjecture on my part, and like I said, this is based on maybe 5 minutes of goofing around with the demo.

The environment is prelit. The environment can also be dynamically lit by Kratos's actions. But, interestingly, not both at the same time on the same triangle. This is easy to see if you study the specular highlights as the dynamic lighting dies off. So here's what I think they're doing (more or less).
- They're on a per-vertex basis categorizing whether a triangle is within the range of attenuated brightness of the dynamic lights. If it is, any vert outside of the range is set to brightness of zero. Add that triangle to a list for the particular object for the dynamic lighting path.
- If the triangle is entirely outside the attenuated brightness range add it to a list for the particular object to be processed through prelighting.
- Normally categorizing things like this per-triangle might seem rather dangerously expensive and strange, but if you're routing everything through the SPU (which as a PS3 exclusive they sure as hell are) it wouldn't really be that big of a deal.
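Since this is all conjecture anyway, here's the categorization written out as a toy routine.  The vertex-distance test against the light's attenuation radius is my stand-in for however they actually measure "within the range of attenuated brightness":

```python
import math

def partition_triangles(triangles, light_pos, light_radius):
    # Route each triangle per the guessed scheme: any vertex inside the
    # light's attenuation range sends the triangle down the dynamic
    # lighting path (verts outside get zero brightness); triangles
    # entirely outside stay on the prelighting path.
    dynamic, prelit = [], []
    for tri in triangles:                 # tri: tuple of 3 vertex positions
        inside = [math.dist(v, light_pos) <= light_radius for v in tri]
        if any(inside):
            dynamic.append(tri)
        else:
            prelit.append(tri)
    return dynamic, prelit
```

On SPUs this is a trivially parallel streaming job over vertex data, which is why the per-triangle cost wouldn't be a big deal.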

The reason I assume all of this is it appears that when dynamic lighting appears, prelighting disappears. The fade rate of the transition between the two also appears to be along poly - not pixel - boundaries. Doing this would have a few obvious and huge advantages:
1) it's cheap! Any surface undergoing dynamic lighting is not going through the prelighting path. This means that they can be balanced out pretty well against each other, as the complexity of some per-pixel prelit solution (say, Valve's RNM hemispherical basis, or typical SH stuff) is going to be pretty equivalent to a handful of per-pixel point lights. So your worst case scenario is quite well structured!
2) it looks more dramatic. I did something not dissimilar to this back in some of the last-gen MK titles when using effect lights on characters. I found that with fixed-function hardware T&L per-vertex lighting, throwing effect lights would very quickly saturate the lighting to white. This would make the effect lights hard to see in brighter environments, or when the characters happened to be full on in light. Solution - dim the standard lighting on the character inversely proportional to the attenuation of the effect lights. So essentially, as the effect light gets closer, the real lighting fades off, making the effect lights much more dramatic and powerful. Worked brilliantly (heh, literally) and made the effect lighting really pop.


Thursday, August 13, 2009

Has someone tried this before?

Okay, so I'll be describing a simple approach that I haven't seen being pushed around (I should admit, it bears some pretty obvious similarities to Peter-Pike Sloan et al.'s Image-Based Proxy Accumulation for Real-Time Soft Global Illumination, though I think my suggested method is a lot simpler). There's been a lot of talk about deferred lighting solutions for renderers. Most of these solutions have one big thing in common: they require a per-pixel screenspace normal map and specular exponent (or some other fancy BRDFish properties) to have been written out in advance. Not a huge limitation, but a real one nonetheless. So what follows removes that limitation:

a) do a standard Z-prepass
b) allocate a 3-deep MRT color buffer, init to black
c) now evaluate all light sources as one normally might for any standard deferred lighting solution, except instead of outputting lit color, we add into the color buffers a view-space-relative Spherical Harmonic representation of the light's irradiance at the current pixel.

It's implicit, but to make it explicit and state it outright, you're writing a very simple 3-color 4 coefficient SH into the buffer. Or, alternatively one might choose a hemispherical basis that needs fewer coefficients, but there are good reasons to stick with a spherical one (primarily that you can handle arbitrary reflection vectors).
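Steps (b) and (c) as a CPU-side sketch, with nested Python lists standing in for the MRTs and a `project` callback standing in for whatever light-to-SH conversion you pick (both are illustrative plumbing, not a proposal for actual data layout):

```python
def build_sh_buffers(width, height, pixel_pos, lights, project):
    # One 4-coefficient x RGB SH texel per pixel, initialized to black
    # (step b), with every light's SH contribution additively blended
    # in (step c).  project(light, pos) must return a 4x3 list: that
    # light's SH irradiance representation at the given position.
    buf = [[[[0.0, 0.0, 0.0] for _ in range(4)]
            for _ in range(width)] for _ in range(height)]
    for y in range(height):
        for x in range(width):
            for light in lights:
                contrib = project(light, pixel_pos[y][x])
                for i in range(4):
                    for c in range(3):
                        buf[y][x][i][c] += contrib[i][c]   # pure additive blend
    return buf
```

On a GPU this is of course just additive blending into the MRTs while rasterizing light volumes; the point of the sketch is that the inner loop touches nothing but position - no normals, no material data.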

So, why bother with this? Here are a few interesting "wins".
1) lighting is entirely decoupled from all other properties of the framebuffer - even local normals. Lighting can be evaluated separately when we get to shading using a lighting model that changes relative to the shading.

2) since lighting is typically low frequency in scenes, you can almost certainly get away with constructing the buffer at a resolution lower than the real frame buffer. In fact, since the lighting is independent of things like normal discontinuities, you might even be able to get away with ignoring edge discontinuities. Or you could probably work around this using an ID buffer constructed later, similar to how Inferred Lighting works, to multisample the SH triplet (though that seems pretty expensive) and choose the best sample.

3) To me this is really the biggest win of all - because this is decoupled from the need for a G-buffer or any actual properties at the pixel other than its location in space, you could really start doing this work immediately after a prepass has begun! This is becoming more and more important going forward, as getting more and more stuff independent means it's easier to break things into separate jobs and distribute across processing elements. In this case you could actually subdivide the work and have the GPU and CPU/SPU split the work in a fairly simple way, and it's almost the perfect SPU-type task as you don't need any underlying data from the source pixel other than Z.

4) MSAA can be handled in any number of ways, but at the very least, you can deal with it the same way Inferred Lighting does.

5) There's no reason for specularity to suffer from the typical Light-Prepass problem of color corruption by using diffuse color multiplied by spec intensity to fake specular color. Instead you could just evaluate the SH with the reflection vector. Of course, one does need to consider that given the low frequency of the SH as it applies to specularity...

6) Inferred Lighting evaluates the lighting at low frequency and upscales. Unfortunately, if you have very high frequency normal detail (we generally do), this is bad, as this detail is mostly lost - their discontinuity filter only deals with identifying normals at the facet level, and not at the texture level. The suggested method isn't dependent on normals at all as lighting is accumulated independent of them, so it doesn't suffer from that problem.

7) You can start to do a lot of strange stuff with this. For example:
- want to calculate a simple GI approximation? Basically do the standard operating procedure Z based spherical search used in most SSAO solutions, except when a z-texel "passes", accumulate its SH solution multiplied by its albedo and a transfer factor (to dampen things). Now you've basically got the surrounding lighting...
- want to handle large quantities of particles getting lit without doing a weird forward rendering pass that violates the elegance of your code? Do a 2nd Z-pass, this time picking the nearest values, and render the transparent stuff's Z into a new Z buffer. Now, regenerate the SH buffer using this second nearer Z-set into a new buffer set. You now effectively have a light volume, so when rendering the individual particles, simply lerp the two SH values at the given pixel based on the particle's Z (you could even do this at the vertex level if sampling the two SH-sets per-pixel seems cost prohibitive). Of course, this assumes you even care about the lighting being different at varying points in the volume, as you could just use the base set.
- if you rearrange the coefficients and place all the 0th coefficients together in one of the SH buffers you can LOD the lighting quality for distant objects by simply extracting that as a loose non-directional ambient factor for greatly simplified shading.
- you can rasterize baked prelighting directly into the solution if your prelighting is in the same or a transformable basis... assuming people still care about that.
- if you construct the SH volume, you could use it to evaluate scattering in some more interesting ways... You could also use this "SH volume" to do a pretty interesting faking of general volumetric lighting. If one were to get very adventurous, you could - instead of using min-z-distance as the top cap, simply use the near plane, and then potentially subdivide along Z if you wanted, writing the lighting into a thin volume texture.
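The GI approximation in the first bullet above is concrete enough to sketch.  Here a fixed offset list stands in for the SSAO-style spherical search, and the depth-tolerance "pass" test is a simplification of whatever Z comparison you'd actually use:

```python
def gather_gi(sh_buffer, albedo_buffer, z_buffer, px, py, offsets,
              z_tolerance, transfer):
    # Walk neighbour offsets (stand-in for the Z-based spherical search);
    # any neighbour whose depth is close enough "passes", and its SH
    # triplet, tinted by its albedo and damped by a transfer factor,
    # accumulates as approximate bounce light for pixel (px, py).
    bounce = [[0.0, 0.0, 0.0] for _ in range(4)]
    z0 = z_buffer[py][px]
    for dx, dy in offsets:
        x, y = px + dx, py + dy
        if abs(z_buffer[y][x] - z0) < z_tolerance:     # neighbour passes
            alb = albedo_buffer[y][x]
            for i in range(4):
                for c in range(3):
                    bounce[i][c] += sh_buffer[y][x][i][c] * alb[c] * transfer
    return bounce
```

It's crude, but since the SH buffer already holds directional irradiance, the "surrounding lighting" falls out of a plain weighted gather.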

So, the "bad":
- lots of data, as we need 12 coefficients per lit texel. That's a lot of read bandwidth, but really it's not any more expensive than every pixel in our scene needing to read the Valve Radiosity Normal Map lighting basis, which we currently eat.
- Dealing with SHs is certainly confusing and complicated. For the most part this only involves adding SHs together, which is pretty straightforward. But unfortunately converting lights into SHs is not free. The easiest thing to do is pre-evaluate a directional light basis and simply rotate it to the desired direction. Doable, given we're only dealing with 4 coefficients. Or, directly evaluate the directional light and construct its basis. Once you've got a directional light working, you can use it to locally approximate point and spot lights by merely applying their attenuation equations. Of course, if you don't need any of the crazier stuff, you could just use a simpler basis (like the Valve one) where conversion is more straightforward.
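To show why 4 coefficients keep this tame, here's the directional-light construction and the point-light approximation written out.  The constants are the standard real-SH band-0/band-1 values; the linear falloff is an assumed attenuation, and the function names are mine:

```python
import math

C0 = 0.2820948   # sqrt(1/(4*pi)), the Y00 constant
C1 = 0.4886025   # sqrt(3/(4*pi)), the |Y1m| scale

def directional_light_to_sh(rgb, dirn):
    # Project a directional light into 4 SH coefficients.  The linear
    # band's coefficients are just the scaled light direction, which is
    # why "rotating" a pre-built basis is trivial at this order.
    return [[C0 * c for c in rgb],
            [C1 * dirn[1] * c for c in rgb],
            [C1 * dirn[2] * c for c in rgb],
            [C1 * dirn[0] * c for c in rgb]]

def point_light_to_sh(rgb, light_pos, pixel_pos, radius):
    # Locally approximate a point light as a directional light aimed at
    # the pixel, scaled by its attenuation (linear falloff assumed).
    d = [l - p for l, p in zip(light_pos, pixel_pos)]
    dist = math.sqrt(sum(x * x for x in d)) or 1.0
    atten = max(0.0, 1.0 - dist / radius)
    return directional_light_to_sh([c * atten for c in rgb],
                                   [x / dist for x in d])
```

Spot lights are the same thing again with a cone falloff multiplied into the attenuation term.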

Anyway, there we go. If anyone reads this, let me know what you think. It would seem this solution is superior to Inferred Lighting in its handling of lit alpha, as with their solution you can really only "peel" a small number of unique pixels due to the stippling, and the more you peel, the more it degrades the lighting in the scene.

Anyway, for now until I can think of a better name, I'm calling it Immediate Lighting.