Heist – Brainstorming AI Behavior

I suppose I should be upfront about this: a lot of my interest in AI behavior was reignited by watching H3VR’s devlogs, specifically the ones on its “Sosig” enemies. Despite their comical and abstract appearance, these enemies are quite dynamic and behave in interesting, exploitable ways. They can respond to sounds, seek cover, and be blinded or suppressed, to name a few behaviors; and their physics-enabled bodies and weapons allow them to be knocked around and disarmed through physics interactions. They’re equally threatening and funny, which is a very noble goal to achieve.

One important paper cited in these devlogs was about Killzone’s AI. In it, the authors describe how they built an AI system that picks tactically optimal positions to move to, as well as how they calculate visibility efficiently.

From the paper, the process of tactical position picking.
From the paper, polar visibility info.

I find these quite interesting, and certainly beneficial to my own adventures in NPC behavior. However, there are some factors I feel warrant further consideration.

Waypoints are so 2006

The paper specifies a “waypoint” system. This is much closer to Source’s old Nodegraph system than to the NavMesh system s&box currently uses. Waypoints are much more limiting (you can only move from one point to another, instead of across the entire walkable area), but they also make position processing efficient – fewer places to go means fewer decisions to make. I would have to adapt this system to do anything meaningful with it.

One thought I’ve had is to define several evaluation functions that correspond to the desirability of a location, then simply pick some number of random points in a radius – say, 50 – and keep the best one. This can obviously produce poor results (for example, there’s nearby cover but none of the random points lands behind it), but it’s also the simplest approach conceptually and deviates little from the system described. Alternatively, instead of picking random points, I could systematically search for the maximum of the sum of these functions, but that might be very expensive.
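
To make that concrete, here’s a rough sketch of the sampling idea in C#, with made-up weights and a line-of-sight delegate standing in for a real trace. It just scores each random candidate with a sum of evaluation functions and keeps the best; a real version would also snap candidates to the NavMesh:

    using System;
    using System.Numerics;

    // Rough sketch of "sample N points, keep the best". Everything here is
    // hypothetical: the weights are arbitrary and hasLineOfSight stands in
    // for a real trace against the world.
    public static class PositionPicker
    {
        public static Vector3 PickBest(
            Vector3 origin,
            Vector3 threat,
            float radius,
            int samples,
            Func<Vector3, Vector3, bool> hasLineOfSight, // (threat, candidate) -> visible?
            Random rng )
        {
            Vector3 best = origin;
            float bestScore = float.MinValue;

            for ( int i = 0; i < samples; i++ )
            {
                // Sample a point in a disc around the origin (horizontal plane only).
                float angle = (float)(rng.NextDouble() * 2 * Math.PI);
                float dist = radius * MathF.Sqrt( (float)rng.NextDouble() );
                var candidate = origin + new Vector3( MathF.Cos( angle ), MathF.Sin( angle ), 0f ) * dist;

                // Sum of evaluation functions, each weighted by how much I care about it.
                float score = 0f;
                score += hasLineOfSight( threat, candidate ) ? -10f : +10f;              // prefer cover
                score += -0.05f * Vector3.Distance( origin, candidate );                 // prefer short moves
                score += -0.02f * MathF.Abs( Vector3.Distance( threat, candidate ) - 300f ); // prefer an ideal engagement range

                if ( score > bestScore )
                {
                    bestScore = score;
                    best = candidate;
                }
            }

            return best;
        }
    }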

Another thought is to pre-generate some potential points of interest, then use them in this system. There are two big questions, though: how do you pick these points, and how do they respond to dynamic environments? It would be great to have physical objects act as cover, and have the AI scuttle away when the cover breaks. I remember Half-Life: Alyx having AI behavior that allowed this, but I can’t find any meaningful information from a Google search. In theory, you would scan the map for a list of static PoIs (around corners, cover, etc.), then add dynamic PoIs around objects. I don’t need PoIs out in the open – we can navigate to any point using the NavMesh system, after all.
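
If I go down that route, the dynamic half might look something like this hypothetical PoI type, where a dynamic-cover point stops being usable once the prop behind it is gone (the names and the coverStillExists check are placeholders):

    using System;
    using System.Numerics;

    // One way the static/dynamic PoI split could look. Entirely hypothetical:
    // static PoIs would come from a map scan, dynamic ones get attached to props.
    public enum PoiKind { Corner, StaticCover, DynamicCover }

    public record PointOfInterest( Vector3 Position, PoiKind Kind, object CoverObject = null )
    {
        // A dynamic PoI is only usable while the prop providing cover still exists;
        // coverStillExists stands in for a real check against the physics object.
        public bool IsUsable( Func<object, bool> coverStillExists )
            => Kind != PoiKind.DynamicCover || (CoverObject != null && coverStillExists( CoverObject ));
    }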

Lastly, it might be possible to abandon the idea of evaluating points entirely. Using the polar visibility check system, I could find the cone that is most covered from threats, then use that to converge on a good position to stand.
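
Here’s a sketch of what that could look like, assuming the polar scan gives one clear distance per angular sector and the threat positions are known. The weights and coordinate handling are entirely made up:

    using System;
    using System.Numerics;

    // Rough sketch: given a polar visibility scan (one clear distance per sector),
    // score each sector by how well it shields us from the known threats and
    // return the best one as a movement direction.
    public static class PolarCover
    {
        // Index of the sector a target direction falls into (X/Y horizontal plane).
        static int SectorOf( Vector3 origin, Vector3 target, int sectors )
        {
            var to = target - origin;
            float angle = MathF.Atan2( to.Y, to.X );
            if ( angle < 0 ) angle += 2 * MathF.PI;
            return (int)(angle / (2 * MathF.PI / sectors)) % sectors;
        }

        public static int MostCoveredSector( Vector3 origin, float[] clearDistance, Vector3[] threats )
        {
            int sectors = clearDistance.Length;
            int half = sectors / 2;
            var best = (Index: 0, Score: float.MinValue);

            for ( int i = 0; i < sectors; i++ )
            {
                float score = clearDistance[i] * 0.01f; // slight preference for room to move

                foreach ( var threat in threats )
                {
                    int threatSector = SectorOf( origin, threat, sectors );
                    float threatDist = Vector3.Distance( origin, threat );

                    // If the radial trace toward the threat hit something first,
                    // the threat is already occluded and matters less.
                    bool threatOccluded = clearDistance[threatSector] < threatDist;
                    float weight = threatOccluded ? 0.2f : 1.0f;

                    // 0 steps = moving straight at the threat, 'half' = directly away.
                    int diff = Math.Abs( i - threatSector );
                    int angularSteps = Math.Min( diff, sectors - diff );
                    score += weight * (angularSteps - half * 0.5f);
                }

                if ( score > best.Score )
                    best = (i, score);
            }

            return best.Index;
        }
    }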

You fight like an NPC

There’s also a somewhat fundamental issue with implementing the ideas in this paper – the goal of the AI. The paper creates AI that is seemingly smart, but still plays like a non-player character. Right now, my AI fights and moves much more like a player than an NPC: it strafes around a lot and doesn’t consider cover at all. Simply put, they’re more like bots.

I might have to tone these behaviors down, however, and make the NPCs more predictable and stationary. If a lot of NPC enemies are coming in from a certain direction, their maneuvering will be ineffective anyway. Plus, there’s always the concern of CPU cost: strafing means a lot of path recalculation, and that might suck. Another concern is that for these NPCs to work on almost every map with a NavMesh, I also need to account for situations where cover is nonexistent (like the center of Construct); but then comes the question of “how much cover is not enough?” I’ll have to make a decision on this down the line.

Perhaps I should adapt the current AI into some real bots, like for DM98? Might be fun playing against them.

Visibility and Awareness

Right now, my NPCs simply use traces to track targets – one trace per target per tick. That might add up quickly. However, the paper’s polar visibility system relies on waypoints too, so it isn’t directly applicable either. If I fired 8 traces for every position I wanted to evaluate, that would be much worse!

That being said, there is one inspiration I drew from the polar visibility system, and it addresses an issue I’ve found my NPCs have. Since much of their movement is based on picking a random position around an origin, they tend to move themselves into corners and up against walls. And since they look where they move, they end up pointing at the wall – hardly intelligent! Having a “polar trace” would allow an NPC to sense where the “center” of its current location is, and position itself in a way that maximizes its visibility.
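
A minimal sketch of that polar trace, assuming some trace function that returns the clear distance in a given direction – the average of the probe directions, weighted by how far each probe got, points toward open space:

    using System;
    using System.Numerics;

    // Probe in a circle around the NPC and average the directions, weighted by
    // how far each probe travelled. The result points toward open space, so an
    // NPC backed into a corner can face (or shuffle) away from the wall.
    // traceDistance is a stand-in for a real trace.
    public static class OpenSpaceSensor
    {
        public static Vector3 OpenDirection( Vector3 origin, int probes, float maxRange,
            Func<Vector3, Vector3, float> traceDistance /* (origin, direction) -> clear distance */ )
        {
            var sum = Vector3.Zero;

            for ( int i = 0; i < probes; i++ )
            {
                float angle = i * (2 * MathF.PI / probes);
                var dir = new Vector3( MathF.Cos( angle ), MathF.Sin( angle ), 0f ); // horizontal plane
                float clear = MathF.Min( traceDistance( origin, dir ), maxRange );
                sum += dir * clear; // longer clear distance pulls the average harder
            }

            return sum.LengthSquared() > 0 ? Vector3.Normalize( sum ) : Vector3.Zero;
        }
    }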

Another part of awareness is sound. Right now, there’s no real way to detect a sound being played. While there’s a vsnd system that could potentially allow me to detect that, it might be a better idea to create some sort of SoundEvent every time a detectable sound is played (like shooting or reloading). That gives me finer control over what kinds of sounds are detectable, and it also stops NPCs from getting agitated by random noises.
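
Something like the following is what I have in mind – a hand-rolled broadcast channel that gameplay code fires alongside the real sound. The type and all the names here are hypothetical:

    using System;
    using System.Collections.Generic;
    using System.Numerics;

    // Only sounds the gameplay code explicitly broadcasts (gunshots, reloads, ...)
    // are audible to NPCs, so ambient noise never agitates them.
    public enum GameSoundKind { Gunshot, Reload, Footstep, Impact }

    public readonly record struct GameSound( Vector3 Position, GameSoundKind Kind, float Loudness );

    public static class GameSounds
    {
        static readonly List<Action<GameSound>> listeners = new();

        public static void Listen( Action<GameSound> listener ) => listeners.Add( listener );

        // Called by gameplay code alongside the real sound, e.g. when a weapon fires.
        public static void Broadcast( GameSound sound )
        {
            foreach ( var listener in listeners )
                listener( sound );
        }
    }

    // An NPC would register a listener and filter by distance and loudness, e.g.:
    //   GameSounds.Listen( s => { if ( Vector3.Distance( s.Position, myPos ) < s.Loudness ) Investigate( s.Position ); } );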

Pathfinding

All of this has also made me realize that, eventually, I will need to modify Sandbox’s NavMesh pathfinding to support weighted pathfinding. For example, a path that is exposed to an enemy’s line of fire is less desirable than one that provides cover.
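
The cost function itself could be as simple as this sketch: base cost is edge length, and anything a threat can see gets a penalty multiplier. How this would actually hook into the NavMesh is the open question, so the isExposed delegate here is just a placeholder for a trace against known threats:

    using System;
    using System.Numerics;

    // Sketch of a weighted edge cost for path search: exposed segments cost more.
    public static class PathCosts
    {
        public static float EdgeCost( Vector3 from, Vector3 to,
            Func<Vector3, bool> isExposed, float exposurePenalty = 3f )
        {
            float length = Vector3.Distance( from, to );

            // Sample the midpoint; a finer version would sample several points along the edge.
            var midpoint = (from + to) * 0.5f;
            float multiplier = isExposed( midpoint ) ? exposurePenalty : 1f;

            return length * multiplier;
        }
    }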

Looking around game-AI papers turns up some hits about “influence mapping”, which might be useful. I need to do a lot more research on it, though.
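
For reference, the core of an influence map is pretty small – a grid where each threat stamps a falloff and the values get summed, which pathfinding or position scoring can then read back. This is just an illustrative toy, not how I’d actually build it:

    using System;

    // Tiny influence map over a 2D grid with linear falloff per threat.
    public static class InfluenceMap
    {
        public static float[,] Build( int width, int height, float cellSize,
            (float X, float Y)[] threats, float radius )
        {
            var map = new float[width, height];

            for ( int x = 0; x < width; x++ )
            for ( int y = 0; y < height; y++ )
            {
                float cx = (x + 0.5f) * cellSize;
                float cy = (y + 0.5f) * cellSize;

                foreach ( var t in threats )
                {
                    float dist = MathF.Sqrt( (cx - t.X) * (cx - t.X) + (cy - t.Y) * (cy - t.Y) );
                    map[x, y] += MathF.Max( 0f, 1f - dist / radius ); // linear falloff
                }
            }

            return map;
        }
    }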

Fuzzy State Machines

While doing research I’ve also come across the idea of fuzzy states. That is to say, instead of the NPC being in exactly one state at all times, it holds a weight for each state describing how much it is “in” that state. I’m not sure how applicable this is yet, but I’ll leave it as a footnote to return to later.
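
As a note to future me, the data structure itself is trivial – a weight per state, normalized into memberships. Everything here is a made-up example:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Sketch of fuzzy states: an NPC can be "70% attacking, 30% retreating" and
    // blend behaviours (or pick probabilistically) based on the memberships.
    public class FuzzyStates<TState> where TState : notnull
    {
        readonly Dictionary<TState, float> weights = new();

        public void Set( TState state, float weight ) => weights[state] = MathF.Max( 0f, weight );

        // Fraction of "how much" the NPC is in a given state, 0..1.
        public float Membership( TState state )
        {
            float total = weights.Values.Sum();
            return total <= 0f ? 0f : weights.GetValueOrDefault( state ) / total;
        }
    }

    // Usage, e.g. blending aggression into movement speed:
    //   var s = new FuzzyStates<string>();
    //   s.Set( "Attack", 0.7f ); s.Set( "Retreat", 0.3f );
    //   float speed = 100f + 150f * s.Membership( "Attack" );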

What a Headache!

As much as I liked the system outlined in this paper and the H3VR Sosigs, it seems that I’ve got a lot to figure out before I can proceed with any big ideas. Hopefully it’ll work itself out once I actually start writing code – that tends to happen a lot.