# Week 11: 3D Cameras

## Key Points

• Today's Lab: Camera Cloning
• `[ ]` How are 3D vertices transformed into positions on a 2D screen?
• `[ ]` What is the difference between a perspective and orthographic projection?
• `[ ]` What do we need to know to describe a camera shot in a 3D game?
• `[ ]` What are some challenges of positioning cameras in 3D spaces?
• `[ ]` What are some additional challenges introduced by interactivity?
• `[ ]` In your own words, explain how one or more fixed camera angles could work in a game and what information they need.
• `[ ]` In your own words, explain how first-person cameras work and what information they need.
• `[ ]` How is our geometry stuff from last week relevant here?
• `[ ]` In your own words, explain how orbit cameras work and what information they need.
• `[ ]` How is `engine3d` different from `engine2d`?
• `[ ]` How are `engine3d` and the `camera3d` example different from our `collision3d` starter?

## Check-in: Progress on 3D Games

Pairs of teams should get together and discuss what you've been up to with your 3D game. What have you implemented so far? What are the roadblocks you're up against? Can you figure out together a way to solve them or work around them by tweaking the design?

What has teamwork been like? Can you think of ways to improve it or make it more equitable?

## 3D Cameras

Last week the notes included this little buried treasure:

> The extra cool thing about matrices is that they can be inverted. We can compute a local-to-world transform as above, but we can also compute a world-to-local transform by inverting each matrix along the way from the world frame of reference to the object's frame. This is important when we need to, say, put a coffee cup in a character's hand; but it's also important for viewing the scene in the first place! If we have a transform that positions a virtual camera in the world somewhere, and we know the world positions of all the objects in the scene, then we also need a way to transform those world positions into the camera's frame of reference—this is just the inverse of the camera's transformation. From there we can apply perspective projection and map the 3D world onto a 2D viewport.

(If you want a refresher on matrix transformations, check out this nicely illustrated article.)

After we apply the inverse camera transformation to every vertex in the scene (on the GPU, natch) we know where every object will be with respect to the camera—but we don't yet know what the camera will "see". Real digital cameras have a rectangular image sensor in their body which measures light (focused by the camera lens) from the scene. You can imagine that there is a four-sided pyramid shape projecting from the camera out into the world, and anything contained within the planes of the pyramid is in principle viewable by the camera. Computers don't like infinity, so we add a far plane to that pyramid; computers also don't like infinitely small points, so we put a near plane in too, where the camera's image sensor would be. What the camera sees—and what we'll eventually map onto the viewport—is the visible portion of the scene from the perspective of that near plane, normalized to fit into that pyramidal shape (the camera frustum). We call this final coordinate space clip space. By playing with the relative distances between the left and right planes (or top and bottom planes, or near and far planes) we can achieve many effects simulating camera field of view and other properties. (Santell's site has some good visualizations of this as well; here are a couple more.)

This frustum shape is what gives us the sense of perspective we need to make a scene feel 3D: farther objects are smaller (the plane they're on has to be shrunken to map onto the near plane) and nearer objects are larger (their size on their plane is closer to the size they'll be when projected onto the near plane). Since scene vertices are all defined in terms of homogeneous coordinates, we can apply a transformation which scales vertices' homogeneous w coordinate depending on the distance from the camera (their z coordinate!), and then divide out that w coordinate when returning to 3D coordinates to achieve sizes varying with distance. In the special case where our far plane is just the same size as our near plane, we have what's called an orthographic projection (parallel lines stay parallel).

In some sense this is where we have the key payoff of using homogeneous coordinates for everything: translations, scaling, and rotations all use one kind of matrix, which means that the camera projection code can uniformly (homogeneously) transform any object's location and size in space.

To sum up, object vertices go through a chain of transformations up to the point where they're drawn onto the screen:

Model space --(model matrix)--> world space --(view matrix)--> view space --(projection matrix)--> clip space
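To make the perspective divide at the end of that chain concrete, here's a tiny sketch in plain Rust (no `cgmath`; the `focal` parameter is a hypothetical stand-in for the full projection matrix): scaling the homogeneous w coordinate by depth, then dividing, makes distant points smaller.

```rust
// Sketch of the perspective divide: a view-space point's w is made
// proportional to its distance from the camera (its z), and dividing
// by that w shrinks points that are farther away.
fn project(p: [f32; 3], focal: f32) -> [f32; 2] {
    // A minimal perspective matrix would set w = z / focal; here we
    // apply just that row's effect directly.
    let w = p[2] / focal;
    [p[0] / w, p[1] / w]
}

fn main() {
    // Two points at the same (x, y) but different depths:
    let near = project([1.0, 1.0, 2.0], 1.0);
    let far = project([1.0, 1.0, 8.0], 1.0);
    // The farther point lands closer to the screen's center, i.e. smaller.
    println!("near: {:?}, far: {:?}", near, far); // near: [0.5, 0.5], far: [0.125, 0.125]
}
```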

## Interactive Cameras

So that's 3D graphics programming—at least, that's how we get vertices from the world to the screen. Somehow we define a camera transformation (a matrix, or an eye position/at direction/up direction, or a position and a quaternion) and parameters like frustum plane positions (maybe determined via field of view variables), and we get a way to go from world triangles to screen pixels. But how do we decide where to put the camera and what to point it at? Especially in an interactive setting, we might want the player to move the camera around, or have the camera follow the player character through space; we might have certain aspects of our game level that are meant to be viewed up close and others that are never meant to be near the camera, or viewed from behind.

In today's lecture we'll outline a couple types of cameras and how to implement them.

### Fixed Cameras

The simplest way to make an interactive camera is not to make an interactive camera. Games like Resident Evil or Final Fantasy 7 use fixed perspectives in each room or zone to frame shots the way the level designers intended. Since each room has a fixed camera location and orientation, that information can be provided in advance. If a zone is very large, or if cuts between cameras are not desirable, we can also create a transition zone between two camera zones, where the camera's position and rotation are interpolated from one shot to the next along some series of points (the further into the transition zone the player is, the closer we get to the target camera shot, until we're entirely in the new camera zone).

One important question this brings up is how character control works: do directional inputs (e.g., on a joystick or `wasd` keys) move the player character forward, back, left, and right relative to the character? Or up, down, left, and right relative to the screen? For example, if I were to hold up on a joystick, would my character be moved upwards on the screen or would it move forward relative to its current facing? Because we know the camera matrix and the player character's local-to-world transform, we can easily convert directional vectors one way or the other—but we have to think about what feels best, especially if we have transitions between multiple camera angles.
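As a sketch of that conversion, a screen-relative stick input only needs the camera's yaw (its rotation about the world up axis) to become a world-space movement direction. This is plain Rust with hypothetical names; the sign conventions are an arbitrary choice, not `engine3d`'s.

```rust
// Convert a screen-relative input vector into a world-space xz direction
// by rotating it by the camera's yaw. Stick right = +x, stick up = +z.
fn screen_to_world(input: (f32, f32), camera_yaw: f32) -> (f32, f32) {
    let (ix, iz) = input;
    let (s, c) = camera_yaw.sin_cos();
    // Rotate the input vector by the camera's yaw in the xz plane.
    (ix * c + iz * s, -ix * s + iz * c)
}

fn main() {
    // With the camera facing down +z (yaw 0), "up" on the stick moves +z:
    println!("{:?}", screen_to_world((0.0, 1.0), 0.0)); // (0.0, 1.0)
    // With the camera rotated a quarter turn, the same input moves along x:
    println!("{:?}", screen_to_world((0.0, 1.0), std::f32::consts::FRAC_PI_2));
}
```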

In our new `engine3d` setup, you can call `camera_mut()` on an `Engine` to get a mutable reference to a camera, which has public fields for its field of view, position, target, and up vector. When entering a room you'd want to set the camera parameters; if you wanted to interpolate between two camera configurations, it would be best to define those per-room as a camera position and rotation and use `cgmath::Vector3::lerp` and `cgmath::Quaternion::slerp` to synchronize a movement from one to another (either on a timer or based on the player's position).

Aside: A `lerp`, or linear interpolation, is a function that takes two "points" describing endpoints of a "line segment" and a ratio `r` between 0 and 1, and returns a value which is the `r`-weighted average of the two endpoints. So, a lerp between 5 and 10 at 0.5 would be 7.5, or a lerp between (0,0) and (10,10) at 0.25 would be (2.5, 2.5). Lerps only make sense for certain data types—interpolating between rotations, for example, has to happen around the great circle of a sphere rather than along a line (only normalized quaternions are valid rotations), so `slerp` is the spherical analogue (and `nlerp` is the slightly less accurate but much more efficient normalized linear version).
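A minimal scalar lerp matching the numbers in the aside looks like this (plain Rust rather than the `cgmath` version; clamping `r` is an optional safety choice):

```rust
// The r-weighted average of two endpoints, with r clamped to [0, 1].
fn lerp(a: f32, b: f32, r: f32) -> f32 {
    let r = r.clamp(0.0, 1.0);
    a + (b - a) * r
}

fn main() {
    println!("{}", lerp(5.0, 10.0, 0.5)); // 7.5
    // Lerping a 2D point is just a per-component lerp:
    let p = (lerp(0.0, 10.0, 0.25), lerp(0.0, 10.0, 0.25));
    println!("{:?}", p); // (2.5, 2.5)
}
```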

### First-Person Cameras

The next simplest way to implement a camera is to lock its position and orientation to the player character's. In first-person games, the camera is placed at roughly chest or eye level with respect to the player, and its rotation in the `xz` plane is fixed to the character's orientation (generally controlled by changes in the mouse position). Since first-person characters generally only rotate in `xz`, the mouse also controls the pitch of the camera (and maybe the vertical angle of the player character's pointer, which is often some boring gun). Some games have characters that don't move like bipedal humanoids, but have independent movement and viewing directions, so only the camera position is locked to the player's position.

Our camera will now definitely need an `update` function which is called every frame to synchronize its position with the player's, and at this point it's helpful to provide an `update_camera` function which synchronizes the engine's camera with our `FPCamera`:

```
pub struct FPCamera {
    pub pitch: f32,
    player_pos: Pos3,
    player_rot: Quat,
}
impl FPCamera {
    fn new() -> Self {
        Self {
            pitch: 0.0,
            player_pos: Pos3::new(0.0, 0.0, 0.0),
            player_rot: Quat::new(1.0, 0.0, 0.0, 0.0),
        }
    }
    fn update(&mut self, events: &engine3d::events::Events, player: &Player) {
        let (_dx, dy) = events.mouse_delta();
        self.pitch += dy / 100.0;
        self.pitch = self.pitch.clamp(-PI / 4.0, PI / 4.0);
        self.player_pos = player.body.c;
        self.player_rot = player.rot;
    }
    fn update_camera(&self, c: &mut engine3d::camera::Camera) {
        // The camera's position is offset from the player's position
        c.eye = self.player_pos + Vec3::new(0.0, 0.5, 0.0);
        // This is the trickiest part of the code, since it relies on
        // some knowledge of matrix math.

        // This way we rotate the camera around the way the player is
        // facing, then rotate it more to pitch it up or down.  Since
        // engine3d::camera::Camera needs eye, target, and up vectors,
        // we need to turn this rotation into a target vector by
        // picking a point a bit "in front of" the eye point with
        // respect to our rotation.  This means composing two
        // rotations (player and camera) and rotating the unit forward
        // vector around by that composed rotation, then adding that
        // to the camera's position to get the target point.
        c.target = c.eye
            + self.player_rot
                * Quat::from(cgmath::Euler::new(
                    // pitch about the camera's x axis
                    cgmath::Rad(self.pitch),
                    cgmath::Rad(0.0),
                    cgmath::Rad(0.0),
                ))
                * Vec3::unit_z();
    }
}
```

### Orbit/Over the Shoulder Cameras

A trickier type of character camera is sometimes called a follow or over the shoulder camera. These types of cameras are positioned so that the player and their feet are visible and are useful for action games where precise positioning is important. They'll also try to do things like lead the player character's movement so the player can see what's coming up (assuming what's in front is more relevant than what's behind) and catch up to the player character's position after it comes to a stop.

In these notes we'll discuss a simpler form of the follow camera called the orbit camera. While the first-person camera was positioned high up in the player character's body, the orbit camera is held behind and above the player character, looking slightly downwards at the character. The player can tilt the camera up or down (pitch), move it closer to or further from the player (changing its distance), or orbit the camera around the axis defined by the character's up direction (yaw). You can imagine that there is a selfie stick attached to the top of the character's head, and the player controls the angle and length of that stick.

```
pub struct OrbitCamera {
    pub pitch: f32,
    pub yaw: f32,
    pub distance: f32,
    player_pos: Pos3,
    player_rot: Quat,
}
```
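To see how pitch, yaw, and distance determine where the eye goes, here's the selfie-stick geometry as spherical coordinates around the player. This is a plain-Rust sketch (no `cgmath`); the axis conventions—yaw around the up axis, pitch above the horizontal, `-z` as "behind"—are an assumption.

```rust
// Compute the orbit camera's eye position from pitch, yaw, and distance:
// a point on a sphere of radius `dist` centered on the player.
fn orbit_eye(player: [f32; 3], pitch: f32, yaw: f32, dist: f32) -> [f32; 3] {
    let (sp, cp) = pitch.sin_cos();
    let (sy, cy) = yaw.sin_cos();
    // The offset points behind (-z) and above (+y) the player.
    [
        player[0] - dist * cp * sy,
        player[1] + dist * sp,
        player[2] - dist * cp * cy,
    ]
}

fn main() {
    // Zero pitch and yaw: the camera sits straight behind the player.
    println!("{:?}", orbit_eye([0.0, 1.0, 0.0], 0.0, 0.0, 5.0)); // [0.0, 1.0, -5.0]
}
```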

Why are we using yaw/pitch angles here? We only have two rotational degrees of freedom and we don't need to interpolate camera positions. We could just as well write:

```
pub struct OrbitCamera {
    distance: f32,
    rot: Quat,
    player_pos: Pos3,
    player_rot: Quat,
}
```

And then pitch the camera up or down by multiplying `rot` by a `Quat` representing a small `zy` rotation, or orbit the camera around by multiplying `rot` by a `Quat` representing a small `xz` rotation. While the example code in the `camera3d` starter uses pitch and yaw angles and direct mouse control of those angles, the quaternion-based approach would make it easy to perform smooth transitions between rotations:

```
pub struct OrbitCamera {
    distance: f32,
    rot: Quat,
    target_rot: Option<Quat>,
    rot_timer: f32,
    rot_duration: f32,
    player_pos: Pos3,
    player_rot: Quat,
}
```
```
impl OrbitCamera {
    //...
    fn orbit_to(&mut self, q: Quat, duration: f32) {
        self.target_rot = Some(q);
        self.rot_duration = duration;
        self.rot_timer = 0.0;
    }
    fn update(&mut self, events: &engine3d::events::Events, player: &Player) {
        if let Some(tgt) = self.target_rot {
            self.rot_timer += engine3d::DT;
            if self.rot_timer >= self.rot_duration {
                // The transition is done: snap to the target and hand
                // control back to the player.
                self.rot = tgt;
                self.target_rot = None;
                self.rot_timer = 0.0;
                self.rot_duration = 0.0;
            }
        } else {
            // use events to rotate rot up/down or left/right
            // or move distance in/out
            // or set a target rotation and duration with orbit_to
        }
        self.player_pos = player.body.c;
        self.player_rot = player.rot;
    }
    fn update_camera(&self, c: &mut engine3d::camera::Camera) {
        // The camera should point at the player
        c.target = self.player_pos;
        // If we have a target rotation slerp to that, otherwise use self.rot
        let r = self
            .target_rot
            .map(|tgt| self.rot.slerp(tgt, self.rot_timer / self.rot_duration))
            .unwrap_or(self.rot);
        // The eye is rotated around the player's position and offset backwards
        c.eye = self.player_pos
            + (self.player_rot * r * Vec3::new(0.0, 0.0, -self.distance));
    }
}
```

This kind of camera is convenient for players since they can control their view separately from the movement of the character, but still keep the camera focused on the character. If the camera gradually points itself towards the character's facing direction when not being manually controlled by the player, it gives players who aren't interested in camera control a way to ignore the camera while still offering fine-grained control.

One important consideration here is that while collision stops the player from getting into awkward situations (halfway inside an obstacle, say) the presented code offers no such guarantee for the camera. With camera control a player can put the camera inside of a wall or behind an obstacle, making it impossible to see the character or showing parts of the level that were meant to be hidden.

To determine whether we have a clear view of the player, raycasts or sphere-casts are generally made from the camera's eye to various points on the player. If those raycasts hit something else before hitting the player, the player is occluded and the camera's position should be corrected: orbit it back to where it was before, move it closer to the player until the occlusion no longer occurs, or give the intervening obstacles cutouts or other visual effects so the character's silhouette remains visible. This writeup describes some of the major design decisions for follow cameras in the Unity3D setting, but the overall exposition applies to any engine.
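A minimal version of that occlusion test uses the standard ray-sphere intersection (this is the textbook test from Real-Time Collision Detection, chapter 5, in plain Rust; the ray direction is assumed normalized):

```rust
// Does a ray from `origin` along (normalized) `dir` hit a sphere at
// `center` with radius `r`? Returns the distance along the ray if so.
fn ray_hits_sphere(origin: [f32; 3], dir: [f32; 3], center: [f32; 3], r: f32) -> Option<f32> {
    // Vector from the sphere center to the ray origin.
    let m = [origin[0] - center[0], origin[1] - center[1], origin[2] - center[2]];
    let b = m[0] * dir[0] + m[1] * dir[1] + m[2] * dir[2];
    let c = m[0] * m[0] + m[1] * m[1] + m[2] * m[2] - r * r;
    // Ray starts outside the sphere (c > 0) and points away (b > 0): no hit.
    if c > 0.0 && b > 0.0 {
        return None;
    }
    let disc = b * b - c;
    if disc < 0.0 {
        return None; // the ray misses the sphere entirely
    }
    Some((-b - disc.sqrt()).max(0.0)) // distance to the first hit
}

fn main() {
    // Camera at the origin looking down +z; obstacle sphere 5 units ahead.
    let t = ray_hits_sphere([0.0; 3], [0.0, 0.0, 1.0], [0.0, 0.0, 5.0], 1.0);
    println!("{:?}", t); // Some(4.0): anything farther than 4 units is occluded
}
```

If the player is farther away than the returned distance, something sits between the camera and the player and the camera should react.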

A quick efficiency aside: since a raycast is as expensive as gathering collision contacts, it's worthwhile to have one stage of processing that determines what raycasts will be necessary during a frame, then conduct the raycasts in a separate step, and then allow interested entities to process the results of those raycasts in a third step. This way we limit the number of trips through the collision geometry and keep the cache happy.
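A toy sketch of that staging, with a hypothetical 1D "world" (queries are `(origin, direction)` pairs on a line, the only geometry is one wall) standing in for real collision geometry:

```rust
// Stage 2 of the pipeline: run every queued raycast in one batch, so we
// make a single pass over the collision geometry per frame.
fn cast_all(queries: &[(f32, f32)], wall: f32) -> Vec<Option<f32>> {
    queries
        .iter()
        .map(|&(origin, dir)| {
            let t = (wall - origin) / dir;
            if t >= 0.0 { Some(t) } else { None } // negative t: wall is behind us
        })
        .collect()
}

fn main() {
    // Stage 1: interested entities queue up the raycasts they'll need.
    let queries = vec![(0.0, 1.0), (5.0, -1.0), (4.0, 1.0)];
    // Stage 2: one batched trip through the geometry (a wall at x = 3.0).
    let results = cast_all(&queries, 3.0);
    // Stage 3: each requester processes its own result.
    for (i, r) in results.iter().enumerate() {
        match r {
            Some(t) => println!("query {} hit at distance {}", i, t),
            None => println!("query {} hit nothing", i),
        }
    }
}
```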

### Other Thoughts on Cameras

There are many more types of cameras that we haven't explored in depth. A fly camera is like a first-person camera, but allows translational movement as well as rotation. Top-down or bird's-eye-view cameras, orthographic cameras at an elevated angle (so-called isometric cameras), and more are appropriate for different types of games. Camera design is also essential for 2D games, though we mostly ignored it earlier.

Camera design is game design, so one of the best venues for publications on the topic is the Game Developers Conference (GDC). If you're interested in cameras I can recommend the classic GDC talk 50 Camera Mistakes. When your game has many types of cameras in it, composing them all becomes a challenge, but doing so is key to cinematic third-person games; it's not uncommon for studios to employ dedicated camera programmers. Finally, camera special effects can be a very helpful polish tool for establishing game feel.

## Engine3D Starter

For this week's starter we'll generalize the code from last week into a crate called `engine3d`. Set up your workspace like so:

• A `threed` folder, with
• A `Cargo.toml` like this one:

```
[workspace]
members = ["*"]
exclude = ["target", "content"]
```
• A `content` folder, with the contents of content.zip
• An `engine3d` folder, with the contents of engine3d.zip
• A `camera3d` folder, with a `Cargo.toml` like this:

```
[package]
name = "camera3d"
version = "0.1.0"
authors = ["Joseph C. Osborn <joseph.osborn@pomona.edu>"]
edition = "2018"

# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html

[dependencies]
engine3d = {path = "../engine3d/"}
env_logger = "0.7"
rand = "0.8.3"
winit = "0.24.0"
cgmath = "0.18"
```
• And its `src/main.rs`:

```
use engine3d::{collision, events::*, geom::*, render::InstanceGroups, run, Engine, DT};
use rand;
use std::f32::consts::PI;
use winit;

const NUM_MARBLES: usize = 10;
const G: f32 = 1.0;

#[derive(Clone, Debug)]
pub struct Player {
    pub body: Sphere,
    pub velocity: Vec3,
    pub acc: Vec3,
    pub rot: Quat,
    pub omega: Vec3,
}

impl Player {
    const MAX_SPEED: f32 = 3.0;
    fn render(&self, rules: &GameData, igs: &mut InstanceGroups) {
        igs.render(
            rules.player_model,
            engine3d::render::InstanceRaw {
                model: (Mat4::from_translation(self.body.c.to_vec() - Vec3::new(0.0, 0.2, 0.0))
                    * Mat4::from_scale(self.body.r)
                    * Mat4::from(self.rot))
                .into(),
            },
        );
    }
    fn integrate(&mut self) {
        self.velocity += ((self.rot * self.acc) + Vec3::new(0.0, -G, 0.0)) * DT;
        if self.velocity.magnitude() > Self::MAX_SPEED {
            self.velocity = self.velocity.normalize_to(Self::MAX_SPEED);
        }
        self.body.c += self.velocity * DT;
        self.rot += 0.5 * DT * Quat::new(0.0, self.omega.x, self.omega.y, self.omega.z) * self.rot;
    }
}

trait Camera {
    fn new() -> Self;
    fn update(&mut self, _events: &engine3d::events::Events, _player: &Player) {}
    fn render(&self, _rules: &GameData, _igs: &mut InstanceGroups) {}
    fn update_camera(&self, _cam: &mut engine3d::camera::Camera) {}
    fn integrate(&mut self) {}
}

#[derive(Clone, Debug)]
pub struct FPCamera {
    pub pitch: f32,
    player_pos: Pos3,
    player_rot: Quat,
}

impl Camera for FPCamera {
    fn new() -> Self {
        Self {
            pitch: 0.0,
            player_pos: Pos3::new(0.0, 0.0, 0.0),
            player_rot: Quat::new(1.0, 0.0, 0.0, 0.0),
        }
    }
    fn update(&mut self, events: &engine3d::events::Events, player: &Player) {
        let (_dx, dy) = events.mouse_delta();
        self.pitch += dy / 100.0;
        self.pitch = self.pitch.clamp(-PI / 4.0, PI / 4.0);
        self.player_pos = player.body.c;
        self.player_rot = player.rot;
    }
    fn update_camera(&self, c: &mut engine3d::camera::Camera) {
        c.eye = self.player_pos + Vec3::new(0.0, 0.5, 0.0);
        // The camera is pointing at a point just in front of the composition
        // of the player's rot and the camera's rot (player * cam * forward-offset)
        let rotation = self.player_rot
            * Quat::from(cgmath::Euler::new(
                // pitch about the camera's x axis
                cgmath::Rad(self.pitch),
                cgmath::Rad(0.0),
                cgmath::Rad(0.0),
            ));
        let offset = rotation * Vec3::unit_z();
        c.target = c.eye + offset;
    }
}

#[derive(Clone, Debug)]
pub struct OrbitCamera {
    pub pitch: f32,
    pub yaw: f32,
    pub distance: f32,
    player_pos: Pos3,
    player_rot: Quat,
}

impl Camera for OrbitCamera {
    fn new() -> Self {
        Self {
            pitch: 0.0,
            yaw: 0.0,
            distance: 5.0,
            player_pos: Pos3::new(0.0, 0.0, 0.0),
            player_rot: Quat::new(1.0, 0.0, 0.0, 0.0),
        }
    }
    fn update(&mut self, events: &engine3d::events::Events, player: &Player) {
        let (dx, dy) = events.mouse_delta();
        self.pitch += dy / 100.0;
        self.pitch = self.pitch.clamp(-PI / 4.0, PI / 4.0);

        self.yaw += dx / 100.0;
        self.yaw = self.yaw.clamp(-PI / 4.0, PI / 4.0);
        if events.key_pressed(KeyCode::Up) {
            self.distance -= 0.5;
        }
        if events.key_pressed(KeyCode::Down) {
            self.distance += 0.5;
        }
        self.player_pos = player.body.c;
        self.player_rot = player.rot;
        // TODO: when player moves, slightly move yaw towards zero
    }
    fn update_camera(&self, c: &mut engine3d::camera::Camera) {
        // The camera should point at the player
        c.target = self.player_pos;
        // And be rotated around the player's position and offset backwards
        let camera_rot = self.player_rot
            * Quat::from(cgmath::Euler::new(
                // pitch about x, yaw about y
                cgmath::Rad(self.pitch),
                cgmath::Rad(self.yaw),
                cgmath::Rad(0.0),
            ));
        let offset = camera_rot * Vec3::new(0.0, 0.0, -self.distance);
        c.eye = self.player_pos + offset;
        // To be fancy, we'd want the camera's eye to be an object in the
        // world whose rotation is locked to point towards the player, whose
        // distance from the player is locked, and so on---so player OR
        // camera movements would apply accelerations to the camera, which
        // could be "beaten" by collision.
    }
}

#[derive(Clone, Debug)]
pub struct Marbles {
    pub body: Vec<Sphere>,
    pub velocity: Vec<Vec3>,
}

impl Marbles {
    fn render(&self, rules: &GameData, igs: &mut InstanceGroups) {
        igs.render_batch(
            rules.marble_model,
            self.body.iter().map(|body| engine3d::render::InstanceRaw {
                model: (Mat4::from_translation(body.c.to_vec()) * Mat4::from_scale(body.r)).into(),
            }),
        );
    }
    fn integrate(&mut self) {
        for vel in self.velocity.iter_mut() {
            *vel += Vec3::new(0.0, -G, 0.0) * DT;
        }
        for (body, vel) in self.body.iter_mut().zip(self.velocity.iter()) {
            body.c += vel * DT;
        }
    }
    fn iter_mut(&mut self) -> impl Iterator<Item = (&mut Sphere, &mut Vec3)> {
        self.body.iter_mut().zip(self.velocity.iter_mut())
    }
}

#[derive(Clone, Copy, PartialEq, Debug)]
pub struct Wall {
    pub body: Plane,
    control: (i8, i8),
}

impl Wall {
    fn render(&self, rules: &GameData, igs: &mut InstanceGroups) {
        igs.render(
            rules.wall_model,
            engine3d::render::InstanceRaw {
                model: (Mat4::from(cgmath::Quaternion::between_vectors(
                    Vec3::new(0.0, 1.0, 0.0),
                    self.body.n,
                )) * Mat4::from_translation(Vec3::new(0.0, -0.025, 0.0))
                    * Mat4::from_nonuniform_scale(0.5, 0.05, 0.5))
                .into(),
            },
        );
    }

    fn input(&mut self, events: &engine3d::events::Events) {
        self.control.0 = if events.key_held(KeyCode::A) {
            -1
        } else if events.key_held(KeyCode::D) {
            1
        } else {
            0
        };
        self.control.1 = if events.key_held(KeyCode::W) {
            -1
        } else if events.key_held(KeyCode::S) {
            1
        } else {
            0
        };
    }
    fn integrate(&mut self) {
        self.body.n += Vec3::new(
            self.control.0 as f32 * 0.4 * DT,
            0.0,
            self.control.1 as f32 * 0.4 * DT,
        );
        self.body.n = self.body.n.normalize();
    }
}

struct Game<Cam: Camera> {
    marbles: Marbles,
    wall: Wall,
    player: Player,
    camera: Cam,
    pm: Vec<collision::Contact<usize>>,
    pw: Vec<collision::Contact<usize>>,
    mm: Vec<collision::Contact<usize>>,
    mw: Vec<collision::Contact<usize>>,
}
struct GameData {
    marble_model: engine3d::assets::ModelRef,
    wall_model: engine3d::assets::ModelRef,
    player_model: engine3d::assets::ModelRef,
}

impl<C: Camera> engine3d::Game for Game<C> {
    type StaticData = GameData;
    fn start(engine: &mut Engine) -> (Self, Self::StaticData) {
        use rand::Rng;
        let mut rng = rand::thread_rng();
        let wall = Wall {
            body: Plane {
                n: Vec3::new(0.0, 1.0, 0.0),
                d: 0.0,
            },
            control: (0, 0),
        };
        let player = Player {
            body: Sphere {
                c: Pos3::new(0.0, 3.0, 0.0),
                r: 0.3,
            },
            velocity: Vec3::zero(),
            acc: Vec3::zero(),
            omega: Vec3::zero(),
            rot: Quat::new(1.0, 0.0, 0.0, 0.0),
        };
        let camera = C::new();
        let marbles = Marbles {
            body: (0..NUM_MARBLES)
                .map(move |_x| {
                    let x = rng.gen_range(-5.0..5.0);
                    let y = rng.gen_range(1.0..5.0);
                    let z = rng.gen_range(-5.0..5.0);
                    let r = rng.gen_range(0.1..1.0);
                    Sphere {
                        c: Pos3::new(x, y, z),
                        r,
                    }
                })
                .collect::<Vec<_>>(),
            velocity: vec![Vec3::zero(); NUM_MARBLES],
        };
        // wall_model, marble_model, and player_model come from the engine's
        // asset loader; see the starter code for the model-loading calls.
        (
            Self {
                // camera_controller,
                marbles,
                wall,
                player,
                camera,
                // TODO nice this up somehow
                mm: vec![],
                mw: vec![],
                pm: vec![],
                pw: vec![],
            },
            GameData {
                wall_model,
                marble_model,
                player_model,
            },
        )
    }
    fn render(&mut self, rules: &Self::StaticData, assets: &engine3d::assets::Assets, igs: &mut InstanceGroups) {
        self.wall.render(rules, igs);
        self.marbles.render(rules, igs);
        self.player.render(rules, igs);
        // self.camera.render(rules, igs);
    }
    fn update(&mut self, _rules: &Self::StaticData, engine: &mut Engine) {
        // dbg!(self.player.body);
        // TODO update player acc with controls
        // TODO update camera with controls/player movement
        // TODO TODO show how spherecasting could work?  camera pseudo-entity collision check?  camera entity for real?
        // self.camera_controller.update(engine);

        self.player.acc = Vec3::zero();
        if engine.events.key_held(KeyCode::W) {
            self.player.acc.z = 1.0;
        } else if engine.events.key_held(KeyCode::S) {
            self.player.acc.z = -1.0;
        }

        if engine.events.key_held(KeyCode::A) {
            self.player.acc.x = 1.0;
        } else if engine.events.key_held(KeyCode::D) {
            self.player.acc.x = -1.0;
        }
        if self.player.acc.magnitude2() > 1.0 {
            self.player.acc = self.player.acc.normalize();
        }

        if engine.events.key_held(KeyCode::Q) {
            self.player.omega = Vec3::unit_y();
        } else if engine.events.key_held(KeyCode::E) {
            self.player.omega = -Vec3::unit_y();
        } else {
            self.player.omega = Vec3::zero();
        }

        // orbit camera
        self.camera.update(&engine.events, &self.player);

        self.wall.integrate();
        self.player.integrate();
        self.marbles.integrate();
        self.camera.integrate();

        {
            use rand::Rng;
            let mut rng = rand::thread_rng();
            for (body, vel) in self.marbles.iter_mut() {
                if (body.c.distance(Pos3::new(0.0, 0.0, 0.0))) >= 40.0 {
                    body.c = Pos3::new(
                        rng.gen_range(-5.0..5.0),
                        rng.gen_range(1.0..5.0),
                        rng.gen_range(-5.0..5.0),
                    );
                    *vel = Vec3::zero();
                }
            }
        }
        self.mm.clear();
        self.mw.clear();
        self.pm.clear();
        self.pw.clear();
        let mut pb = [self.player.body];
        let mut pv = [self.player.velocity];
        collision::gather_contacts_ab(&pb, &self.marbles.body, &mut self.pm);
        collision::gather_contacts_ab(&pb, &[self.wall.body], &mut self.pw);
        collision::gather_contacts_ab(&self.marbles.body, &[self.wall.body], &mut self.mw);
        collision::gather_contacts_aa(&self.marbles.body, &mut self.mm);
        collision::restitute_dyn_stat(&mut pb, &mut pv, &[self.wall.body], &mut self.pw);
        collision::restitute_dyn_stat(
            &mut self.marbles.body,
            &mut self.marbles.velocity,
            &[self.wall.body],
            &mut self.mw,
        );
        collision::restitute_dyns(
            &mut self.marbles.body,
            &mut self.marbles.velocity,
            &mut self.mm,
        );
        collision::restitute_dyn_dyn(
            &mut pb,
            &mut pv,
            &mut self.marbles.body,
            &mut self.marbles.velocity,
            &mut self.pm,
        );
        self.player.body = pb[0];
        self.player.velocity = pv[0];

        for collision::Contact { a: ma, .. } in self.mw.iter() {
            // apply "friction" to marbles on the ground
            self.marbles.velocity[*ma] *= 0.995;
        }
        for collision::Contact { a: pa, .. } in self.pw.iter() {
            // apply "friction" to players on the ground
            assert_eq!(*pa, 0);
            self.player.velocity *= 0.98;
        }

        self.camera.update_camera(engine.camera_mut());
    }
}

fn main() {
    env_logger::init();
    let title = env!("CARGO_PKG_NAME");
    let window = winit::window::WindowBuilder::new().with_title(title);
    run::<GameData, Game<FPCamera>>(window, std::path::Path::new("content"));
}
```

Let's dig into a few aspects of `camera3d`'s `main.rs`. First, note that there is a `Camera` trait describing the methods a game camera must implement for the `camera3d` program. We'll get into the details on that later, but there are two implementors: `FPCamera` and `OrbitCamera`.

We'll jump to `main` next. You'll notice it's a very short function that just calls `engine3d::run` with some type parameters, a content path, and a window builder with parameters we control.

The next thing to read up on is these two types we pass into `run`: `GameData` and `Game`. `GameData` is just a struct we define with four model references in it; it serves a role similar to what `engine2d` called `Rules`. In a more complex game it would have level data and the like. `Game`, on the other hand (in this case parameterized further by a `Camera` type), has our dynamic game state: the marbles, the walls, the player, the camera, and collision contact Vecs. It also implements the trait `engine3d::Game` which means it must define `start` (which returns a `Game` and `GameData`), `render` (which issues rendering commands to the engine), and `update` (which updates game state). This is just a fancier version of what `engine2d` did.

Looking at the Player struct, it has a position (in its collision shape), a rotation, and linear and angular velocity. Its `integrate` method is about what we'd expect, but its `render` method is new. Instead of returning an `InstanceRaw` like we did last time, it receives an `&mut InstanceGroups` with a `render` method that takes a model and an `InstanceRaw`. `engine3d::render::InstanceGroups` is a new struct that gathers up instanced drawing commands issued for different models (either one instance at a time or in a larger batch of several at once) so that it can efficiently make batched drawing calls without knowing in advance all the models that might be drawn. Note the difference between `Marbles` and `Player` here.

Input handling is also different now: instead of using `winit_input_helper`, `engine3d::events::Events` provides queries for whether a key is pressed or where the mouse position is. This should help resolve issues we faced with unreliable `key_pressed` and `key_released` events.

Finally, the `OrbitCamera` and `FPCamera` structs are two example cameras. Change which one the game uses by modifying the call to `engine3d::run` at the end of the file.

## Today's Lab: Camera Cloning

Today's lab will have a reverse-engineering component and a relatively small programming component. We'll do it in triples.

First, work with two partners to find three game cameras that are interesting and that are different from the provided basic FPCamera and OrbitCamera examples. They should be games you can play and experiment with. They don't have to be 3D games but at least one should be.

Second, describe in fine detail how each of these cameras works with respect to the camera's position and orientation, the character's movement, the player's control over the camera, and the larger environment they're in. This description should be sufficient for someone who hasn't played the game you're talking about to produce an implementation of the camera. Take note of what input data are necessary for these cameras and how they depend on context.

Finally, pick one of these camera types and implement it in the `camera3d` starter code. If it requires collision detection or ray casting, implement that too! Basic ray-plane and ray-sphere collision code is in chapter 5 of Real-Time Collision Detection and you can find code online as well. Efficiency isn't too much of a concern at this point. Be sure that you make necessary changes both to how the player moves and to how the camera moves! Player character, camera, and controls aren't really fully separable notions, so don't forget that they all contribute to a gameplay experience!