Workflow and Tooling

Key Points

  • What are some trade-offs between in-engine and external tools?
  • What are some of the things that might need to happen to game assets between their creation and eventual use in the game?
  • How can we associate related assets (spritesheets, animations, whatever) and maintain that association through runtime?
  • What is an interned string and why might it be useful?
  • What are some reasons to try and support hot-reloading of assets in your engine?
  • What are some ways to reduce iteration time on game design and game code changes?
  • What is deterministic record and replay, why is it interesting, and why is it hard?
  • What are some things that make saving and loading game state tricky?

Workflow

Recommended reading: Log onto O'Reilly Safari and find Game Engine Architecture; read chapter 7 ("Resources and the File System") from section 2 onwards; chapters 10 and 15 also have some goodies.

As you now definitely know, making a game is tough! It requires making lots of other things and stitching them together, sometimes in very particular ways—textures must be in this folder and be named in this way, spritesheet animation data must be named the same as the corresponding texture, or whatever—and sometimes the data that come out of our tools need to be processed to be usable (for example, if we create separate images for our game objects we might want to combine them all together into one big texture for efficiency).

Today we're going to explore a variety of topics and tradeoffs around how we get game data—assets like level layouts, spritesheets, enemy statistics, and so on—into games.

In-Engine vs External Tools

The first question is how the assets are made in the first place. Maybe an artist has made a drawing of a potion flask or laid out a tileset in an image editor, or maybe a designer has filled in a CSV file with enemy statistics for every enemy in the game. Image and text editors are great, and general-purpose game level editors like Ogmo Editor are great too. But there is some space in between what those tools offer and what our game engine might allow—for example, in Ogmo a designer might use several terrain layers while our engine only supports one, or our engine might require that images be a certain size—and we could prefer to have designers build assets directly inside of our game engine (perhaps in a special "design mode"). This means we can implement purpose-built level editors, creature editors, animation tools, and more which feature perfect integration with our running game code. This can also make design tasks easier: it's probably nicer to click on an entity and edit its properties rather than hunt through a spreadsheet or CSV file.

An interesting middle-ground is to use custom tools based around a web server hosted by the game (Naughty Dog does this, or did for a long time). In production builds the server code can be removed, but for development it's helpful to have the relatively stable browser environment for reading through diagnostics and tweaking game content.

For our course, it probably makes sense to use external tools where possible. But I could be persuaded to treat internal tooling as a feature point.

Asset Pipelines

No matter where assets come from, they may not be in a usable state. For example, certain platforms might run faster if images all use the BGRA8888 pixel format, or if non-power-of-two images are padded to the right size, or if we combine many small images into one large one (and generate atlas metadata in the process). While we could do these things at runtime when we load images, we can save some latency by doing them before running the game (perhaps debug builds could also do this automatically at game launch time, or whenever assets change on disk).
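
To make the atlas idea concrete, here's a rough sketch of what a packing step could produce. AtlasEntry and the naive "shelf" strategy here are purely illustrative; a real pipeline would sort images, allow rotation, and so on.

// Hypothetical atlas metadata: where each source image ended up in the big texture.
pub struct AtlasEntry {
    pub name: String, // e.g. "potion.png", relative to the asset root
    pub x: u32,       // top-left corner within the atlas
    pub y: u32,
    pub w: u32,       // source image dimensions
    pub h: u32,
}

// Naive "shelf" packing: place images left to right, starting a new row
// when we run out of horizontal space. The point is the metadata we get out.
pub fn pack(atlas_w: u32, atlas_h: u32, images: &[(String, u32, u32)]) -> Option<Vec<AtlasEntry>> {
    let (mut x, mut y, mut row_h) = (0, 0, 0);
    let mut out = Vec::with_capacity(images.len());
    for (name, w, h) in images {
        if x + w > atlas_w {
            // Start a new shelf below the tallest image in this row
            x = 0;
            y += row_h;
            row_h = 0;
        }
        if y + h > atlas_h {
            return None; // Doesn't fit; the caller could try a bigger atlas
        }
        out.push(AtlasEntry { name: name.clone(), x, y, w: *w, h: *h });
        x += w;
        row_h = row_h.max(*h);
    }
    Some(out)
}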

Assets are also often made up of many smaller assets. Consider the definition of a game character: it's not only the character's stats, but also which animations and spritesheets and sound effects it uses. All of these assets must be tied together somehow and sometimes even packaged together into a single file to be loaded by the game engine when the character comes into play. Asset pipelines handle this task too.
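
For instance (purely illustrative, and assuming serde with its derive feature), a character "bundle" might just be a struct that names everything the character needs, read from a data file by the pipeline or by the loader:

use serde::Deserialize;

// A hypothetical "character bundle": one asset that names all the others it needs.
// The pipeline (or the loader) can read this and pull in the referenced files.
#[derive(Deserialize)]
pub struct CharacterDef {
    pub name: String,
    pub max_hp: u32,
    pub speed: f32,
    pub spritesheet: String,        // path relative to the asset root
    pub animations: Vec<String>,    // animation data files to load alongside it
    pub sound_effects: Vec<String>, // sound files to load alongside it
}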

You'll probably have a whole set of modules dedicated to loading (and possibly editing) assets and processing them as need be; they might even make up an additional crate besides your library crate. Look up the cargo documentation on "workspaces" for advice on how to structure multi-crate projects.

To make sure not to forget this step, consider running your game using a Makefile or adding a build.rs script (see the Cargo documentation for details), or even implementing hot-reloading.
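
Here's a sketch of what a build.rs step could look like. It assumes (hypothetically) that assets live in content/ and only collects texture paths into a generated file, but this is the same spot where you'd do pixel-format conversion or atlas packing.

// build.rs -- a sketch of a pre-build asset step, assuming assets live in content/.
use std::{env, fs, path::Path};

fn main() {
    // Re-run this script whenever anything in content/ changes
    println!("cargo:rerun-if-changed=content");
    let mut paths = Vec::new();
    for entry in fs::read_dir("content").unwrap() {
        let path = entry.unwrap().path();
        if path.extension().map_or(false, |e| e == "png") {
            // Debug formatting conveniently gives us a quoted Rust string literal
            paths.push(format!("{:?}", path.file_name().unwrap()));
        }
    }
    // Write a generated file the game can pull in with include!
    let out = Path::new(&env::var("OUT_DIR").unwrap()).join("texture_paths.rs");
    let body = format!("pub const TEXTURE_PATHS: &[&str] = &[{}];", paths.join(", "));
    fs::write(out, body).unwrap();
}

The game can then pull the generated file in with include!(concat!(env!("OUT_DIR"), "/texture_paths.rs")), which is essentially the interned-string trick described in the next section.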

Asset Management

In many games, all access to assets happens through an AssetManager or similarly-named type which is responsible for loading bundles of assets and unloading them when they're no longer needed. There are three main motivations for this:

  1. It's very difficult to reason about the performance and RAM usage of your program if any piece of code might, for example, cause a file to be read into memory.
  2. If resources are owned by a specific structure, we know that there won't be any possibility of dangling references, redundant double-loading of resources, or other issues that could confuse us while programming.
  3. If resources are owned by a specific structure, we know exactly what resources are in use at any given time, which lets us implement hot reloading and other cool tricks.

Asset managers are a little bit like programming language garbage collectors, so we could imagine setting a simple one up as follows. I'm assuming TextureRef and AnimationRef are cheap ways to refer to particular textures or animations by "name", from which a path can be computed; this way we avoid doing lots of operations on strings. For example, we might use interned strings where the reference is an index into an array of strings defined at compile time, perhaps computed from a traversal of the filesystem during the asset pipeline step and included with an include!("content/strings.rs") macro invocation:

textures.rs:

#[derive(PartialEq,Eq,Clone,Copy,Hash,Debug)]
pub struct TextureRef(usize);

// You would make this with a build.rs script or something,
// maybe paired with a procedural or macro_rules macro tref!("rects.png")
impl TextureRef {
    pub const RECTS:Self = Self(0);
    // A const for each texture

    const PATHS:[&'static str;1] = [
        "rects.png",
        // a corresponding path from the asset root for each texture
    ];

    // A way to get the TextureRef (if any) for a path at runtime.
    // Most of the time you'd get a TextureRef via TextureRef::RECTS or whatever.
    pub fn ref_for(path: &str) -> Option<TextureRef> {
        Self::PATHS.iter().position(|p| p == &path).map(TextureRef)
    }
}

use std::collections::HashMap;
use std::path::{Path, PathBuf};
use std::rc::Rc;

struct Assets {
    asset_root: PathBuf,
    textures: HashMap<TextureRef, Rc<Texture>>,
    //animations: HashMap<AnimationRef, Rc<Animation>>,
}
impl Assets {
    // Takes anything path-like as an argument
    pub fn new(asset_root: impl AsRef<Path>) -> Self {
        Self {
            asset_root: asset_root.as_ref().to_owned(),
            textures: HashMap::new(),
        }
    }
    // Needs &mut self since it might load data
    pub fn get_texture(&mut self, t: TextureRef) -> &Rc<Texture> {
        self.textures.entry(t).or_insert_with(|| Rc::new(Texture::with_file(
            &self.asset_root.join(TextureRef::PATHS[t.0]),
        )))
    }
    // Throw away assets that aren't used anymore; call occasionally from main
    pub fn cleanup(&mut self) {
        self.textures.retain(|_k, v| Rc::strong_count(&v) > 1);
        //self.animations.retain(|_k, v| Rc::strong_count(&v) > 1);
    }
}

This is okay and will probably be enough for this course. You do need to call cleanup from time to time, and you'll need to pass your Assets value around wherever textures are needed, but this is fine. A fancier version would load and unload assets in bundles to make loading times and memory management a bit more predictable.
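
To make the usage pattern concrete, the main loop might look roughly like this (a sketch; the frame loop and the drawing are stand-ins for whatever your engine actually does):

use std::rc::Rc;

fn main_loop_sketch() {
    let mut assets = Assets::new("content");
    // Anything that needs a texture across frames holds a clone of the Rc
    let player_tex: Rc<Texture> = Rc::clone(assets.get_texture(TextureRef::RECTS));
    for _frame in 0..600 {
        // ... update the world and draw using player_tex ...
    }
    drop(player_tex);
    // With the last outside Rc gone, cleanup() will actually unload the texture
    assets.cleanup();
}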

Hot Reloading

We start playing our game and notice we made a mistake—a pixel is the wrong color, an animation is using the wrong timings, there's a collision bug. We could certainly close the game, make the change, and run it again, but this is painful and time-consuming (Rust has many advantages, but fast compile times are not among them). So we use a trick in games called hot reloading (which, it turns out, is also popular in the web app community) to experiment as quickly as possible. Broadly, there are two things we might want to hot-reload: assets and code.

Hot-Reloading Assets

It would be very cool if you could change an art asset or game data file and see the result in your running game immediately. Luckily we can do that! The trick is to hide resource access behind handles:

use std::rc::Rc;
use std::cell::RefCell;
use std::cell::Ref;

pub struct Handle<T> {
    inner:Rc<RefCell<(T,usize)>>,
}
impl<T> Handle<T> {
    pub fn get(&self) -> Ref<'_, (T,usize)> {
        self.inner.as_ref().borrow()
    }
    pub fn clone(h:&Self) -> Self {
        Self { inner:Rc::clone(&h.inner) }
    }
}

A Handle is a limited wrapper around Rc<RefCell<(T,usize)>> which admits getting a Ref that can be dereferenced for drawing, for example in draw_sprite:

self.bitblt(&s.image.get().0, s.frame, s.posn);

The two elements there are the texture itself and its version number, so interested code can notice when the texture has changed; the version is probably not a big deal and could be left out. Another thing to note is that there's nothing stopping code from using get() followed by a clone of the inner Texture to get a durable copy that definitely will not change due to hot reloading.
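
If you do keep the version number, here's the kind of thing it buys you: code that caches data derived from a texture can notice when the underlying texture has been hot-reloaded and rebuild its cache. This is only a sketch; CachedSprite and expensive_processing are made up for illustration.

// expensive_processing is a made-up stand-in for "derive something costly from a texture"
fn expensive_processing(_source: &Texture) -> Texture { unimplemented!() }

struct CachedSprite {
    image: Handle<Texture>,
    cache: Option<(usize, Texture)>, // (version the cache was built from, derived data)
}
impl CachedSprite {
    fn derived(&mut self) -> &Texture {
        let version = self.image.get().1;
        let stale = self.cache.as_ref().map(|(v, _)| *v) != Some(version);
        if stale {
            // The texture was hot-reloaded (or never processed); rebuild the cache
            let rebuilt = expensive_processing(&self.image.get().0);
            self.cache = Some((version, rebuilt));
        }
        &self.cache.as_ref().unwrap().1
    }
}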

So how does hot reloading work? I threw something together with version 4.0.15 of the notify crate:

use std::sync::mpsc::{channel, Receiver, TryRecvError};
use notify;

pub struct Assets {
    asset_root: PathBuf,
    // We're giving out handles now
    textures: HashMap<TextureRef, Handle<Texture>>,
    // Animations, etc, whatever
    // Ok, this is new: It receives events from the notify watcher.
    rx: Receiver<notify::DebouncedEvent>,
}
impl Assets {
    pub fn new(asset_root: impl AsRef<Path>) -> Self {
        // ... register filesystem watchers with crate notify
        use notify::{RecommendedWatcher, RecursiveMode, Watcher};
        use std::time::Duration;
        // Get a sender and receiver
        let (tx, rx) = channel();
        // Give the sender to a new watcher
        let mut watcher: RecommendedWatcher = Watcher::new(tx, Duration::from_secs(2)).unwrap();
        // It will watch the asset root recursively
        watcher
            .watch(&asset_root, RecursiveMode::Recursive)
            .unwrap();
        // I don't want to deal with putting the right Watcher implementor
        // into Assets as a trait object or as a generic argument.
        // But the Watcher can't be dropped before Assets is.
        // The "solution" will be to just leak the watcher and never drop it.
        // This means we can never call `watch` or `unwatch` again.
        // In a real program you'd use cfg statements to use the right
        // concrete Watcher type on a field in `struct Assets`.
        // It's the only thing I don't like about the notify crate.
        Box::leak(Box::new(watcher));
        Self {
            asset_root: asset_root.as_ref().to_owned(),
            textures: HashMap::new(),
            rx,
        }
    }
    // Basically the same as before, except we make a Handle
    pub fn get_texture(&mut self, t: TextureRef) -> &Handle<Texture> {
        self.textures.entry(t).or_insert_with(|| Handle {
            inner: Rc::new(RefCell::new((
                Texture::with_file(&self.asset_root.join(TextureRef::PATHS[t.0])),
                0,
            ))),
        })
    }
    // Throw away assets that aren't used anymore
    pub fn cleanup(&mut self) {
        self.textures.retain(|_k, v| Rc::strong_count(&v.inner) > 1);
    }
    // To be called from main from time to time: check for pending filesystem events!
    pub fn check_events(&mut self) {
        use notify::DebouncedEvent;
        loop {
            // Get as many as we can, but don't block on it
            match self.rx.try_recv() {
                Ok(event) => match event {
                    // Writes or creates mean we need to reload the texture data

                    // In a real program you'd use the file extension
                    // or part of the path to determine whether to
                    // update textures, animations, whatever
                    DebouncedEvent::NoticeWrite(path)
                    | DebouncedEvent::Write(path)
                    | DebouncedEvent::Create(path) => self.update_texture(path),
                    // Ignore other events for now
                    _ => {}
                },
                Err(TryRecvError::Empty) => break,
                Err(TryRecvError::Disconnected) => panic!("Connection to asset watcher broken!"),
            }
        }
    }
    // This is where the reload happens and we use interior mutability in Rust!
    fn update_texture(&mut self, p: PathBuf) {
        // Remove the working directory which is part of the paths we get from notify
        let p = p.strip_prefix(std::env::current_dir().unwrap()).unwrap();
        // Remove the asset root so we can get into the kinds of values we'd see in PATHS
        let p = p.strip_prefix(&self.asset_root).unwrap();
        // Now if there's a texture we know about at this path...
        if let Some(tref) = TextureRef::ref_for(p.to_str().unwrap()) {
            // And if it's in the textures map (i.e. currently in use)...
            if let Some(tex) = self.textures.get(&tref) {
                // Replace it with the new data and bump its version number
                tex.inner
                    .replace_with(|(_, v)| (Texture::with_file(&self.asset_root.join(p)), *v + 1));
            }
        }
    }
}

It's a mouthful, but this is a basic version that should work well enough. Another good option avoiding notify (but not avoiding Handle) would be to have a keyboard button to reload all assets; another design would be to not use Handle, but call get_texture every time you need a texture (so Sprite would have a TextureRef rather than a Handle or Rc<Texture>). There are dozens of other approaches and it's up to you to pick one you like.
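
For reference, the keyboard-triggered variant is tiny. Given the Assets and Handle types above, something like this sketch would reload every currently-used texture whenever you call it (say, when F5 is pressed in your main loop):

impl Assets {
    // Sketch of the "press a key to reload everything" alternative
    pub fn reload_all(&mut self) {
        for (tref, tex) in self.textures.iter() {
            let path = self.asset_root.join(TextureRef::PATHS[tref.0]);
            // Same interior-mutability trick as update_texture
            tex.inner
                .replace_with(|(_, v)| (Texture::with_file(&path), *v + 1));
        }
    }
}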

Hot-Reloading Code

If hot-reloading assets is good, hot-reloading code must be even better!

I won't spend much time on this (see So You Want to Live-Reload Rust for (way) more detail), but we've got basically two approaches: have a binary driver crate run a game crate as a library, owning the game loop and calling its functions, reloading it whenever it changes; or have a binary driver run a game crate as a child process, sending it inputs and receiving outputs over a socket and killing and re-running it whenever it changes on disk.

There are some crates to facilitate each of these approaches, but this week's lecture notes are already getting long.

Hot-reloading code is nice, but in my opinion much of the benefit can be obtained from reloading assets (and moving as much of the game rules as possible away from code and into data files) and much of what's left can be had by deterministic record-and-replay or save states.

Messing with Game State

What's record-and-replay? Well, we make games that run on computers by processing initial states and sequences of inputs into outputs. Sometimes random stuff happens, but if we control the seed of the random number generator then we should be able to get to the same game state by replaying the same sequence of inputs from the same initial state. In theory this makes sense, but it's complicated by a few things:

  1. Fix your timestep!
  2. If you use rand, don't reach for thread_rng (it can't be seeded); use a seedable RNG like StdRng initialized from a known seed.
  3. Try not to write code that depends on the system clock; if you must, pass the time in as a parameter.
  4. If using multithreading, be extra sure your code doesn't produce different outputs under different thread schedulings or delays.
  5. Floating point math is deterministic for a given compilation of your program on a given machine, but not necessarily across different builds, platforms, or optimization settings (sorry!).

A corollary here is that your game simulation code shouldn't be directly checking which buttons are held and where the mouse is; input should be abstracted away and handled by your engine so that you can record it and play it back later. All that being said, if you can stick with non-floating-point types or don't mind replays breaking from time to time, structuring your game as a data processing pipeline from an initial state to a final state through a series of inputs can be really useful for debugging (just replay buggy sequences), testing, AI development, and many other tasks (including networking).
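
Here's a minimal sketch of that pipeline shape. Input and GameState are stand-ins for your own types, and I'm assuming the rand crate for the seedable RNG.

use rand::{rngs::StdRng, SeedableRng};

// Input stands in for whatever your engine records each frame.
#[derive(Clone)]
struct Input { /* buttons held, mouse position, ... */ }

// Keeping the RNG inside the state is what makes
// "same start + same inputs = same end" actually hold.
#[derive(Clone)]
struct GameState {
    rng: StdRng,
    // ... the rest of the simulation state ...
}

struct Recording {
    seed: u64,
    inputs: Vec<Input>, // one entry per fixed-timestep frame
}

// One fixed-timestep update: a function of (state, input) only.
// No clock reads, no thread_rng, no dependence on thread scheduling.
fn step(_state: &mut GameState, _input: &Input) {
    // ... your simulation goes here ...
}

// Replaying is just running the same steps again from the same start.
fn replay(rec: &Recording) -> GameState {
    let mut state = GameState { rng: StdRng::seed_from_u64(rec.seed) };
    for input in &rec.inputs {
        step(&mut state, input);
    }
    state
}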

Unfortunately, those constraints are quite severe. If we can't perform record and replay, we can at least snapshot the game state (you do have a GameState struct, right?) and save it to file for reloading later. This way we can easily jump to different parts of the game, try out some actions, and make sure things are working. Snapshotting can be a good way to implement networked multiplayer too, and it even lets us do things like have the AI try out hypothetical moves and then rewind to before the move was performed. Passing around game states also lets us increase our effective display framerate by allowing rendering to interpolate between two game states (previous frame and next frame).
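
A sketch of the snapshot approach, assuming serde and serde_json; the hard part in practice is making every type reachable from GameState derive the serde traits, since handles, RNGs, and shared resources usually need special handling.

use std::fs::File;
use std::io::{BufReader, BufWriter};
use serde::{Deserialize, Serialize};

// Everything reachable from GameState has to derive these too.
#[derive(Serialize, Deserialize)]
struct GameState { /* ... */ }

// Panicking on errors is fine for a development-only save-state tool.
fn save_state(state: &GameState, path: &str) {
    let file = BufWriter::new(File::create(path).unwrap());
    serde_json::to_writer(file, state).unwrap();
}

fn load_state(path: &str) -> GameState {
    let file = BufReader::new(File::open(path).unwrap());
    serde_json::from_reader(file).unwrap()
}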

A good middle ground between record-and-replay and plain snapshotting is to periodically snapshot states and also record the inputs that happen in between. Then at replay time, we load a state and play inputs until we get to the next snapshot, which we then load (maybe interpolating between the state we reached by inputs and the state we're trying to get to). To jump to a particular point in time, we load the last snapshot before that point and then replay from there. This state synchronization can be extremely effective for networking code, especially when combined with action prediction.
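
Building on the record-and-replay sketch from a moment ago (same Input, GameState, and step, with GameState deriving Clone), the seek operation can look like this:

// Keyframes plus the inputs in between: to reach a given frame, load the
// nearest earlier snapshot and replay recorded inputs from there. Because the
// RNG state lives inside GameState, replaying from a keyframe stays deterministic.
struct Timeline {
    // (frame number, snapshot taken at the start of that frame), sorted by frame
    keyframes: Vec<(usize, GameState)>,
    inputs: Vec<Input>, // inputs[i] is the input applied during frame i
}

impl Timeline {
    fn seek(&self, target_frame: usize) -> GameState {
        // Find the last keyframe at or before the target frame...
        let (start, snapshot) = self
            .keyframes
            .iter()
            .rev()
            .find(|(frame, _)| *frame <= target_frame)
            .expect("no keyframe at or before the target frame");
        // ...copy it, then replay the recorded inputs up to the target.
        let mut state = snapshot.clone();
        for input in &self.inputs[*start..target_frame] {
            step(&mut state, input);
        }
        state
    }
}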

Any of these three approaches could help with our present task of shortening iteration times. For example, when the game code changes we could save a snapshot of the game state, quit the game, load the game, and load the state (maybe via a command line argument). If we have record and replay or state synchronization, we could write tests for game optimizations or bug fixes by recording a play session, implementing the optimization, and ensuring that the resulting state is "identical enough" to the original terminal state. We could even show diffs between playthroughs to see how the game code has changed the outcome (Inform 7 does this with its skein view).

In conclusion, it's really important to notice when part of your workflow hurts and find ways to mitigate that. Quitting and restarting the game is the enemy here—you lose time to compilation and booting up and getting back to the right part of the game, and by that time you've forgotten what you wanted to test. If you can make the game inside of the game you're in great shape; if not, hopefully you can at least test out large categories of changes without restarting the game (whether because the game is described in data assets or because you have code live-reloading). If you must restart the game (or get it into a known-good situation for a test), you should at least be able to snapshot and restore game states quickly.

If you have enough time to swivel around in your chair while you're waiting to try out your change, you're probably waiting too long. If it's compiling Rust code then you're pretty much stuck, especially in release builds. One possibility is to compile your game with few or no optimizations, but your engine and other dependencies with lots by putting something like this in Cargo.toml:

# Set the default for dependencies.
[profile.dev.package."*"]
opt-level = 3