Week 13: 3D Animation/How Do I…?
Key Points
- Today's Lab: Animation Lab
- [ ] Why is animation important in a 3D game?
- UV Animation
  - [ ] What are UVs again, and how can we do animation with them?
  - [ ] How do our model vertices/our vertex shader need to change to support UV animation?
  - [ ] How can we set up our model to support UV animation?
  - [ ] How is UV animation similar to and different from sprite animation?
- Skeletal Animation
  - [ ] What is a rig or skeleton?
  - [ ] How is a model rigged, and what does that mean?
  - [ ] How do our model vertices/our vertex shader need to change to support skinned animation?
  - [ ] What is a joint or bone, and how does it relate to animation?
  - [ ] Think of two different ways to compute a local-to-global transform for a joint in a skeleton (one top-down, one bottom-up)
  - [ ] What are the key transformation steps for doing skeletal animation?
  - [ ] How is skeletal animation similar to and different from paperdoll animation?
- [ ] How could we drive both UV and skeletal animation using the same system?
Animation, Revisited
In 3D, the meaning of "animation" is stretched a bit further than in the 2D games we've been working with so far. Since objects are made of a bunch of triangles, we can animate their geometry in arbitrarily complex ways, and that's before we think about what to paint on those triangles! While physics or pre-programmed object movements can give a similar effect to animation, there are two main approaches to animation per se in 3D renderers: UV (or texture) animation and skeletal (also called skinned) animation.
UV Animation
UV animation is named for the \(u\) and \(v\) texture coordinates used in texture mapping. Instead of animating the physical positions of the triangles making up a mesh, UV animation changes the UV-mapping over time. This can be done to achieve an effect like sprite animation (remember, we did sprite animation by changing the offset in a spritesheet) or for effects like water or moving light (by smoothly adjusting the UVs over time). Other effects that rely on scaling, rotating, or otherwise manipulating UVs are also types of UV animation, but we'll focus just on translating UVs.
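For something like scrolling water, the animation can be as simple as making the offset a function of time. A tiny sketch, assuming `t` is the elapsed time in seconds (the scroll speeds here are made up):

```rust
// Scroll the texture continuously over time; as long as out-of-bounds UVs
// wrap around (see the note on address modes below), the offset can just
// keep growing.
let uv_offset = (t * 0.05, t * 0.02); // arbitrary u and v scroll speeds
```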
UV animation often depends on defining the right texture addressing mode; remember that when we define a texture sampler (to pick texels from a texture), we also define what should happen if texture accesses are out-of-bounds. Effects that depend on scrolling a texture (while keeping the mesh triangles stationary) will need to make use of repeating modes to achieve the correct effect.
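If we're creating our samplers through something like wgpu, a repeating sampler might be set up as below (a sketch; `device` is assumed to be a `wgpu::Device` already in scope):

```rust
// With Repeat address modes, UV coordinates scrolled past 1.0 wrap around
// the texture instead of clamping at its edge.
let sampler = device.create_sampler(&wgpu::SamplerDescriptor {
    address_mode_u: wgpu::AddressMode::Repeat,
    address_mode_v: wgpu::AddressMode::Repeat,
    mag_filter: wgpu::FilterMode::Linear,
    min_filter: wgpu::FilterMode::Linear,
    ..Default::default()
});
```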
Generally a UV animation will work by applying some offset uniformly to a batch of UV indices in a model (either by directly modifying its `ModelVertex` data and re-uploading the buffer, or in a shader). A UV animation might look something like this:
```rust
struct UVAnim {
    target_uvs: Vec<usize>,      // which UVs to alter...
    timings: Vec<f32>,           // frame timings
    uv_offsets: Vec<(f32, f32)>, // how much to change the UVs for each timing
    interpolate: bool,           // whether to move smoothly or jump between timings
}
```
To sample a UV offset, we can determine which frame we're in like so:
```rust
// Clamp t so it never runs past the final keyframe time.
let t = self.timings.last().unwrap().min(t);
// Find the keyframe interval [t0, t1] that contains t.
let kidx = self
    .timings
    .iter()
    .zip(self.timings[1..].iter())
    .position(|(t0, t1)| t >= *t0 && t <= *t1)
    .unwrap();
let t0 = self.timings[kidx];
let t1 = self.timings[kidx + 1];
let tr = (t - t0) / (t1 - t0); // tr is between 0.0 and 1.0.
let off0 = self.uv_offsets[kidx];
let off1 = self.uv_offsets[kidx + 1];
let off = if self.interpolate {
    lerp(off0, off1, tr) // Linear intERPolation; off0 + (off1 - off0) * tr
} else {
    off0
};
```
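The `lerp` call above isn't something Rust gives us for tuples; a small helper along these lines (an illustrative sketch) would do:

```rust
// Linearly interpolate between two (f32, f32) UV offsets; t is in [0, 1].
fn lerp(a: (f32, f32), b: (f32, f32), t: f32) -> (f32, f32) {
    (a.0 + (b.0 - a.0) * t, a.1 + (b.1 - a.1) * t)
}
```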
But how do we apply `off` to each of `self.target_uvs`? We probably shouldn't use it to alter the model's vertex data in place, since then we'd lose the original texture coordinate information when we adjust it by the offset. We could instead copy the model's vertex data, modify its texture coordinates by adding the output offsets, and then upload the copy. A better approach would be to add the UV offsets as an additional field on `ModelVertex`; this won't work with instancing, but it would otherwise be a good solution (we'd just have to change the vertex shader to add the offsets to the texture coordinates).
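That extra field might look like the following, assuming a `ModelVertex` roughly like the one we've been using (the exact fields are illustrative):

```rust
#[repr(C)]
#[derive(Copy, Clone, bytemuck::Pod, bytemuck::Zeroable)]
struct ModelVertex {
    position: [f32; 3],
    tex_coords: [f32; 2],
    normal: [f32; 3],
    uv_offset: [f32; 2], // new: the vertex shader adds this onto tex_coords
}
```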
If we want to support instancing, we'd need to provide the instance-specific UV offsets in the instance data; we might not actually have enough instance parameter slots to do this, so we could achieve it by writing the UV offsets for all instances into a texture, or by adopting a more restrictive animation scheme (for example, offsetting all vertices by the same UV offset rather than per-vertex offsets; or allowing only up to four distinct offsets, with the model vertex defining which of those four sorts of offsets should be used).
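As a sketch of that more restrictive scheme, the instance data could carry a small fixed set of offsets, with each vertex storing an index choosing among them (these names and layouts are hypothetical):

```rust
#[repr(C)]
#[derive(Copy, Clone, bytemuck::Pod, bytemuck::Zeroable)]
struct InstanceRaw {
    model: [[f32; 4]; 4],      // the usual per-instance transform matrix
    uv_offsets: [[f32; 2]; 4], // up to four animated UV offsets per instance
}
```

Each vertex would then carry a small integer (0 through 3) picking which of the four offsets applies to it.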
Since a model might have thousands or tens of thousands of vertices, and we may not want to animate UVs on all of them, we generally have two approaches available to us: either we give each vertex information describing which texture offset, if any, it should use; or we break our model into several meshes which are animated and drawn separately. This is often done for things like characters' faces or equipment they're holding.
Skeletal Animation
- rigs: "Skeletons"
- scale/rotate/translate
- joint to world transform
- vertex skinning (see the sketch after this list)
- influence joints
- weights
- blending as midpoint
- keyframe animation
- targets (joints), channels (values), samplers (interpolation)
- animation blending
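To make those bullet points concrete, here is a minimal sketch of the two core pieces, assuming glam-style math types (the `Joint` layout is illustrative): computing local-to-global joint transforms top-down, then skinning a vertex by blending its weighted influence joints.

```rust
use glam::{Mat4, Vec3};

struct Joint {
    parent: Option<usize>, // index of the parent joint; None for the root
    local: Mat4,           // this joint's local transform, driven by keyframes
    inverse_bind: Mat4,    // takes a bind-pose vertex into this joint's space
}

// Top-down: walk parents-before-children, multiplying each joint's local
// transform onto its parent's already-computed global transform.
fn joint_to_world(joints: &[Joint]) -> Vec<Mat4> {
    let mut world: Vec<Mat4> = Vec::with_capacity(joints.len());
    for j in joints {
        let parent = j.parent.map_or(Mat4::IDENTITY, |p| world[p]);
        world.push(parent * j.local);
    }
    world
}

// Blend a vertex position through each of its influence joints, weighted.
fn skin_vertex(
    pos: Vec3,
    influences: [usize; 4],
    weights: [f32; 4],
    world: &[Mat4],
    joints: &[Joint],
) -> Vec3 {
    let mut out = Vec3::ZERO;
    for (ji, w) in influences.iter().zip(weights) {
        // Undo the bind pose, then apply the joint's animated global transform.
        let moved = world[*ji] * joints[*ji].inverse_bind * pos.extend(1.0);
        out += moved.truncate() * w;
    }
    out
}
```

In practice, the skinning loop usually runs in the vertex shader, with each joint's global-times-inverse-bind matrix uploaded in a uniform or storage buffer every frame.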
Activity: How Do I…?
- Random groups
- Three questions per group