Game Entities and AI
- [ ] What are some ways we can define game levels and the entities in them?
- [ ] What are some tradeoffs between class-inheritance-style, component-based, and ECS approaches to defining game entities?
- [ ] How is game AI similar to and different from "real" AI?
- [ ] What is the basic idea behind any two of the AI techniques in the Game AI section?
Activity: Reverse Engineering
Let's watch a bit of gameplay. Between your teams, answer these questions:
- How do you think the game world chunks could be organized and what data would go in each chunk?
- What are some differences and similarities between different game entities?
- How would you represent them in an inheritance-based system? A Unity-style component system? An ECS?
- How could you code these entities' behavior? Pick a framework from the previous question and go from there.
- Note that some non-player entities need to interact with each other!
Now, each team can talk through their own game design and discuss how to organize their data and define their entities using any of the approaches above.
Organizing Game Entities
How do we build game worlds and levels? These are often much bigger than we actually want to show at any one time, but often smaller than the whole game. We need another level of organization so we can group together active objects and know what objects to bring in next.
In side-scrolling games, we might have some data representing a compressed version of the game level—a tilemap or set of tilemaps, where some tiles might mean "empty, but spawn this enemy here". As the player moves left or right, we can decompress or discard one column (or chunk of columns) at a time, moving a pointer through the memory region, unloading enemies that are defeated or too far away, and loading in new enemies if space allows.
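As a rough sketch of that idea (all of the names here—Tile, Level, StreamedView—are invented for illustration, not any particular engine's API), we might keep a sliding window of decompressed columns and rebuild it as the camera scrolls:

```rust
// Hypothetical sketch of streaming a side-scroller level one column at a time.
#[derive(Clone, Copy)]
enum Tile {
    Empty,
    Solid,
    EnemySpawn(u8), // "empty, but spawn enemy type #n here"
}

struct Level {
    columns: Vec<Vec<Tile>>, // the whole level, one Vec<Tile> per column
}

struct StreamedView {
    first_column: usize,    // world-space index of the leftmost resident column
    width: usize,           // how many columns we keep decompressed at once
    loaded: Vec<Vec<Tile>>, // the sliding window of decompressed columns
}

impl StreamedView {
    fn scroll_to(&mut self, level: &Level, new_first: usize) {
        // Naive version: rebuild the whole window (assumes the level has at
        // least `width` columns). A real game would reuse the still-visible
        // columns, decode only the newly revealed ones, and spawn or despawn
        // enemies as columns enter and leave the window.
        let max_first = level.columns.len().saturating_sub(self.width);
        self.first_column = new_first.min(max_first);
        self.loaded =
            level.columns[self.first_column..self.first_column + self.width].to_vec();
    }
}
```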
In many 3D games we can do essentially the same thing! If our level is a bunch of rooms connected by doorways, we can load in adjacent rooms and unload farther rooms (despawning anything from those rooms that's no longer visible). As long as no line of sight extends more than one room past a doorway, this works fine. We can augment this approach to handle situations where it doesn't in at least two ways:
- Use a volumetric fog effect to obscure further rooms, or make light fade out
- Pre-calculate which rooms might be visible from which other rooms, and work with such sets of rooms (a sketch follows below)
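A hedged sketch of that second option, using made-up names: precompute a potentially-visible set per room and, whenever the player changes rooms, load what's newly needed and unload the rest.

```rust
use std::collections::{HashMap, HashSet};

type RoomId = u32;

// Hypothetical precomputed "potentially visible set": for each room, every
// room that might be seen from inside it (including itself).
struct VisibilitySets {
    visible_from: HashMap<RoomId, HashSet<RoomId>>,
}

impl VisibilitySets {
    /// Returns (rooms to load, rooms to unload) when moving into `current`.
    fn rooms_to_swap(
        &self,
        loaded: &HashSet<RoomId>,
        current: RoomId,
    ) -> (HashSet<RoomId>, HashSet<RoomId>) {
        let wanted = self.visible_from.get(&current).cloned().unwrap_or_default();
        let to_load = wanted.difference(loaded).copied().collect();
        let to_unload = loaded.difference(&wanted).copied().collect();
        (to_load, to_unload)
    }
}
```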
Often, however, we don't have convenient "doorways" (because we're looking at outdoor environments, for example). In these situations, we might want to break our world into chunks and make sure each chunk has enough tall geometry to occlude farther-away chunks; or else start reaching for techniques like levels of detail, where farther-away things use lower-resolution models, or impostors, where a textured quad stands in for far-away geometry.
Even if our world is composed of self-contained pieces we can load and unload as needed, what goes in those pieces?
One way to think about loading level chunks like rooms or areas (I'll just write "chunks" from now on) is that each chunk defines a set of entities—the static geometry, the enemies (usually lightweight enemy spawners which then either turn into or produce actual enemy entities), and so on—and they all get loaded into the world and activated along with the chunk. Entities can be loaded into one set per chunk or one global set, though in the latter case you might need to track what entities came from which chunk so entities can be unloaded when possible.
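Here's a minimal sketch of that first approach, assuming a single global entity set; every name here (SpawnRecord, LiveEntity, and so on) is hypothetical:

```rust
// Each chunk carries "spawn records"; loading the chunk instantiates them, and
// we remember which chunk each live entity came from for later unloading.
#[derive(Clone, Copy)]
enum EntityKind {
    StaticGeometry,
    EnemySpawner,
    Pickup,
}

struct SpawnRecord {
    kind: EntityKind,
    position: [f32; 3],
}

struct Chunk {
    spawns: Vec<SpawnRecord>,
}

struct LiveEntity {
    kind: EntityKind,
    position: [f32; 3],
    source_chunk: usize, // which chunk this entity came from
}

struct World {
    entities: Vec<LiveEntity>, // one global set, tagged by source chunk
}

impl World {
    fn load_chunk(&mut self, chunk_id: usize, chunk: &Chunk) {
        for s in &chunk.spawns {
            self.entities.push(LiveEntity {
                kind: s.kind,
                position: s.position,
                source_chunk: chunk_id,
            });
        }
    }

    fn unload_chunk(&mut self, chunk_id: usize) {
        // Drop everything that was spawned by this chunk.
        self.entities.retain(|e| e.source_chunk != chunk_id);
    }
}
```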
A second approach is to associate with each chunk (or with the world itself) a tree of objects. Every object has a parent object (which might be the world itself) in terms of which its position and orientation are defined, and objects can be queried for their parents or children. For example, when a character picks up a coffee cup, the cup may be "re-parented" to the character's hand, and when it's set down on the table it will be re-parented again. Game engines like Unity3D have tended to use this approach, since hierarchy can be helpful for organizing related objects and, in some cases, for broad-phase collision checks; the trade-off is that one fixed hierarchy can often be limiting. Accordingly, Unity gives each object a set of user-defined tags as well as layers, and these additional axes allow for cross-cutting queries that traverse multiple object trees.
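A small sketch of such a tree, with plain indices standing in for object handles (the names are illustrative, not Unity's or any engine's API):

```rust
// Every node stores its transform relative to a parent, and re-parenting
// (e.g. a cup picked up by a hand) just changes the parent handle.
type NodeId = usize;

struct Node {
    parent: Option<NodeId>,   // None means "child of the world root"
    local_position: [f32; 3], // position relative to the parent
    children: Vec<NodeId>,
    tags: Vec<String>,        // Unity-style cross-cutting labels
}

struct Scene {
    nodes: Vec<Node>,
}

impl Scene {
    fn reparent(&mut self, child: NodeId, new_parent: NodeId) {
        if let Some(old) = self.nodes[child].parent {
            self.nodes[old].children.retain(|&c| c != child);
        }
        self.nodes[child].parent = Some(new_parent);
        self.nodes[new_parent].children.push(child);
        // A real engine would also recompute local_position so the object
        // keeps its world-space position across the re-parenting.
    }
}
```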
Neither way is clearly better, so what you choose will depend in part on personal preference. In both approaches you might have special-purpose objects representing level geometry, cameras (of which one can be "active" at a given time), lights, and so on as well as typical game characters.
Extensible Game Entities
Even if we know how game objects are arranged in space, how are they defined internally? It's relatively easy if you say that a game entity is just its position, orientation, and shape, but in games we also need to track physics information and entity-specific data like health, stamina, current target, or whatever else we need. Different types of game objects can have special behaviors too. In general-purpose engines, two main approaches are common. We'll explore how to realize them in Rust.
Interfaces and Dynamic Dispatch
The first approach is common in class-inheritance- or prototype-based object-oriented languages: since users can define their own types implementing certain interfaces (either by explicit interface implementation or by inheritance), just treat the game world as a set or tree of Entity subtypes. Each of these can have virtual functions like update() or render(), and the job is done.
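In Rust terms, a minimal sketch of this style might look like the following (the trait name Entity matches the discussion above; the enemy type is invented for illustration):

```rust
// Each game object implements a shared trait, and the world holds boxed trait objects.
trait Entity {
    fn update(&mut self, dt: f32);
    fn render(&self);
}

struct PatrollingEnemy {
    x: f32,
    speed: f32,
}

impl Entity for PatrollingEnemy {
    fn update(&mut self, dt: f32) {
        self.x += self.speed * dt; // every call here is a virtual (dynamic) dispatch
    }
    fn render(&self) { /* draw at self.x */ }
}

struct World {
    entities: Vec<Box<dyn Entity>>,
}

impl World {
    fn update(&mut self, dt: f32) {
        for e in &mut self.entities {
            e.update(dt);
        }
    }
}
```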
The main benefit of this approach is a simple conceptual model (if you're already used to runtime polymorphism). Its drawbacks are poor efficiency (every call is a virtual call, every access of another entity is indirect, and we can't make effective use of the cache) and the requirement that programmers commit to a specific type hierarchy early in the project. This second drawback can be especially painful when we want new objects that act like distinct existing objects: for example, a powerup which moves like a fleeing enemy, or a projectile that can be destroyed before reaching its target. What often happens in such systems is that an uberclass emerges with all the state and behavior necessary for any combination of object properties, along with flags to control which are active. This ends up being a debugging nightmare.
This issue can be mitigated by what one might call Unity-style component architectures, which are really just an application of composition over inheritance. In Unity, an entity has a vector of Component instances, so we can have, for example, a HealthComponent on an object that also has a ProjectileComponent, giving us more flexible tools for combining behaviors. Each Component has its own update() and other methods, and these are called by the entity's corresponding methods. The performance overhead is even worse (now we have another layer of indirection and more virtual calls), and it can even lead to ugly hacks (we can't implement physics, collision detection, or restitution in regular components, since different entities may have their components in different orders).
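A hedged sketch of what that composition might look like in Rust, with each entity owning boxed component trait objects (the component names mirror the examples above; everything else is illustrative):

```rust
// Unity-style composition: an entity owns a vector of components and its
// update() just forwards to each of them.
trait Component {
    fn update(&mut self, dt: f32);
}

struct HealthComponent {
    hp: i32,
}
impl Component for HealthComponent {
    fn update(&mut self, _dt: f32) { /* e.g. tick damage-over-time effects */ }
}

struct ProjectileComponent {
    velocity: [f32; 2],
}
impl Component for ProjectileComponent {
    fn update(&mut self, _dt: f32) { /* integrate position, check lifetime, ... */ }
}

struct Entity {
    components: Vec<Box<dyn Component>>, // yet another layer of indirection
}

impl Entity {
    fn update(&mut self, dt: f32) {
        for c in &mut self.components {
            c.update(dt); // one virtual call per component per entity
        }
    }
}
```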
In Rust, we can achieve either approach using types like Rc&lt;dyn Entity&gt; or Rc&lt;dyn Component&gt; (maybe throwing a RefCell in there if the ownership rules get tricky). The Fyrox engine (previously called RG3D) is based on this kind of mechanism. Note that things can get really tricky if objects need to store long-lived references to each other; this should usually happen through some kind of entity handle rather than trying to store an &amp;Entity. Another avenue is not to let the game engine own our custom entity data, but to have it give us a kind of entity handle which we can use with our own collections of entity data. Then the engine is mostly unaware of our custom state, and game code is notified when entities are destroyed so that we can clean up their ancillary data.
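A sketch of that handle-based avenue, assuming a generational-index-style handle and hypothetical side tables for our custom data:

```rust
// The engine only knows about opaque handles; the game keeps its own tables of
// custom data keyed by handle and cleans them up when an entity is destroyed.
use std::collections::HashMap;

#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct EntityHandle {
    index: u32,
    generation: u32, // lets us detect stale handles after an entity slot is reused
}

#[derive(Default)]
struct GameData {
    health: HashMap<EntityHandle, i32>,
    target: HashMap<EntityHandle, EntityHandle>,
}

impl GameData {
    // Called when the engine (or our glue code) reports an entity was destroyed.
    fn on_entity_destroyed(&mut self, h: EntityHandle) {
        self.health.remove(&h);
        self.target.remove(&h);
        self.target.retain(|_, t| *t != h); // drop dangling references to it too
    }
}
```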
Entity-Component Systems
Looking at the landscape of Rust game engines, most are written in the entity-component-system (ECS) style. Some of this is coincidence (Rust became popular just as this approach became popular, and there was a widely viewed Rust conference talk about ECS), and some of it comes down to ownership and other Rust language features that make large object graphs difficult to work with.
In an ECS, an entity is essentially an identifier or label associated with a collection of components, which store data. Outside of the entities, a set of systems operates on the components. In some ways this is similar to the Unity style, except that the entity is not a data structure that owns its components—components are stored separately and associated with entities through other means. Also, components in this model only store data, and only systems operate on that data.
Components are often stored in contiguous memory to maximize cache utilization and throughput: all entities' colliders live in one big Vec, all entities' physics states in another, and all entities' health values in yet a third. An entity is then, in effect, an index into these collections. A system might even need to make database-style joins across multiple sets of components: for example, a collision system might need to process entities' colliders alongside their corresponding physics states.
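As a toy illustration (hand-rolled, not the API of any particular ECS crate), the storage and a system over it might look like this:

```rust
// Each component type lives in its own Vec, indexed by entity id; a "system"
// is just a loop that joins the component vectors it cares about.
struct Collider {
    radius: f32,
}
struct Physics {
    position: [f32; 2],
    velocity: [f32; 2],
}

struct Components {
    colliders: Vec<Option<Collider>>, // index = entity id; None = entity lacks this component
    physics: Vec<Option<Physics>>,
    health: Vec<Option<i32>>,
}

// A "physics system": joins physics with colliders and steps only entities that have both.
fn physics_system(c: &mut Components, dt: f32) {
    for (phys, col) in c.physics.iter_mut().zip(c.colliders.iter()) {
        if let (Some(p), Some(_collider)) = (phys, col) {
            p.position[0] += p.velocity[0] * dt;
            p.position[1] += p.velocity[1] * dt;
            // ...collision response against other entities would go here
        }
    }
}
```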
Even under the umbrella of ECS, there are distinctions grounded in trade-offs: for example, an archetypal ECS gives up cheap runtime addition and removal of components on entities in exchange for faster iteration over fixed sets of components—different types of objects (i.e., different collections of components) must be defined in advance.
Game AI
Game AI is a broad field intimately connected to game design—it can be seen as the application of AI techniques in support of the player experience. As you work in games you'll accumulate more and more techniques, and find flexible, robust ways to combine them.
Some examples of AI techniques that are applied to games include:
- Path-planning
- Task planning/search
- Utility systems
- Evolutionary search algorithms
- State machines
- Behavior trees
- … and many more
Ultimately, all that matters for the player is that the AI acts in a way consistent with the game design. It's important that the four Pac-Man ghosts do different things (moving towards a point in front of the player, moving toward the player, moving towards a point away from the player, etc); it's important that the behavior of a Koopa Troopa in Mario is so simplistic. Game enemies that play "perfectly" can feel unfair or annoying—it's much more important that they are meaningfully integrated with the game rules and level design.
One way to organize AI in a game—to define how different agents in the game behave differently—is to enumerate the different types of behaviors, implement those behaviors somehow (either as code or as some data structure connected to a technique like behavior trees), and associate them with the types of entities. Often an inheritance-based or Unity-style game engine will assign a "brain" (maybe as one among several components) to each entity responsible for moving it around and making its decisions; sometimes player input (or input from remote players for networked multiplayer games) is also piped through this "brain" design.
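As one concrete (and entirely hypothetical) example of such a "brain", here's a tiny state machine that picks an action each frame based on what the entity perceives:

```rust
// A per-entity "brain" as a small state machine; states and transitions are invented.
#[derive(Clone, Copy)]
enum BrainState {
    Patrol { waypoint: usize },
    Chase { target: usize },
    Flee,
}

enum Action {
    MoveToWaypoint,
    MoveTowardPlayer,
    MoveAwayFromPlayer,
}

struct Brain {
    state: BrainState,
}

impl Brain {
    fn update(&mut self, sees_player: bool, low_health: bool) -> Action {
        // Transition based on perception...
        self.state = match (sees_player, low_health) {
            (true, true) => BrainState::Flee,
            (true, false) => BrainState::Chase { target: 0 }, // 0 = the player, by convention here
            (false, _) => match self.state {
                BrainState::Patrol { waypoint } => BrainState::Patrol { waypoint },
                _ => BrainState::Patrol { waypoint: 0 },
            },
        };
        // ...then act according to the current state.
        match self.state {
            BrainState::Patrol { .. } => Action::MoveToWaypoint,
            BrainState::Chase { .. } => Action::MoveTowardPlayer,
            BrainState::Flee => Action::MoveAwayFromPlayer,
        }
    }
}
```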
A key challenge with game AI, brought to the forefront in Rust, is how to share information across distinct AI systems or agents. This can be achieved in an ECS setting using component joins, and in other systems it may require a variety of queries run against shared, immutable game state (or, worst case, game state hidden behind RefCell or other interior-mutability abstractions). In AI as well as in regular programming, pervasive no-holds-barred global state can be problematic! So game AI has also developed techniques for managing the sharing of state among agents. Two popular approaches are blackboard systems (where multiple agents or systems share information through one of potentially several imaginary blackboards) and environmental AI or affordance systems, where agents' knowledge is deposited into the world itself (the classic example is ants leaving pheromone trails); agents that later happen across that part of the environment "learn" the information embedded there.
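A minimal blackboard sketch, with invented fact and key names—one system or agent posts a fact, and others read it later in the frame:

```rust
// A shared key-value store that agents write facts into and read facts out of.
use std::collections::HashMap;

#[derive(Clone, Copy)]
enum Fact {
    LastKnownPlayerPosition([f32; 2]),
    AlarmRaised(bool),
}

#[derive(Default)]
struct Blackboard {
    facts: HashMap<&'static str, Fact>,
}

impl Blackboard {
    fn post(&mut self, key: &'static str, fact: Fact) {
        self.facts.insert(key, fact);
    }
    fn read(&self, key: &str) -> Option<Fact> {
        self.facts.get(key).copied()
    }
}

// One agent posts the player sighting; another reads it and reacts.
fn example(bb: &mut Blackboard) {
    bb.post("player_pos", Fact::LastKnownPlayerPosition([3.0, 7.0]));
    if let Some(Fact::LastKnownPlayerPosition(p)) = bb.read("player_pos") {
        let _ = p; // steer toward p, alert nearby guards, etc.
    }
}
```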
For some other resources, check out these links:
- The Total Beginner's Guide to Game AI
- The book Artificial Intelligence for Games, Millington & Funge
- The book series Game AI Pro, freely available online
- Red Blob Games has an extensive series of game AI tutorials