Lecture 5 — 2015-09-16
Sets, relations, and semantics
This lecture is written in literate Haskell; you can download the raw source.
Because I used it in the homework, I went over the newtype
keyword, which introduces a cost-free static distinction between types. It’s useful for making sure that people don’t confuse similar types, as in:
newtype Name = Name String
newtype Office = Office String
We can then safely define:
type Employee = (Name,Office)
without worrying about which comes first.
We can then pattern match on these as normal:
location :: Employee -> String
location (Name n, Office o) = n ++ " -- " ++ o
We can use record syntax to make unwrapping these newtypes easier:
newtype Age = Age { getAge :: Int }
newtype Height = Height { getHeight :: Int }
render :: (Age,Height) -> String
render (a,h) = show (getAge a) ++ " (" ++ show feet ++ "' " ++ show inches ++ "\")"
where feet   = getHeight h `div` 12
      inches = getHeight h `mod` 12
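For example (a quick check with made-up numbers, not from class):

render (Age 30, Height 70)   -- "30 (5' 10\")"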
Set theory
We discussed some more set theory, introducing the Cartesian product X × Y = { (x,y) | x ∈ X, y ∈ Y }.
We then used powersets and tuples together to define relations.
Given a set X, we can define binary relations on X as elements of 2^(X × X), that is, as sets of pairs of elements of X.
We say x1 R x2—that is, that x1 and x2 are related—iff (x1, x2) ∈ R.
What are some examples of binary relations R ∈ 2^(X × X), i.e., R ⊆ X × X?
- The empty relation, empty = ∅. Nothing is related to anything. A sad, lonely relation.
- The identity relation, id = { (x, x) | x ∈ X }. This relation is reflexive, and nothing else. This relation can be seen as equality: x id y iff x = y.
- The total relation, total = { (x, y) | x, y ∈ X }. Everything is related to everything else.
Taking concrete sets, we can form more intuitive/useful relations.
- The predecessor relation, pred = { (n,n+1) | n ∈ ℕ }. Here x pred y iff x is the predecessor of y. So 0 pred 1, and 1 pred 2, but there is no x (in the naturals, ℕ) such that x pred 0.
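Since these notes are literate Haskell, here is one way we might model such relations in code: as two-argument predicates. This is my own sketch; the names Rel, emptyRel, and so on are not from class.

type Rel a = a -> a -> Bool    -- a binary relation as a membership test

emptyRel :: Rel a
emptyRel _ _ = False           -- nothing is related to anything

idRel :: Eq a => Rel a
idRel x y = x == y             -- the identity relation: equality

totalRel :: Rel a
totalRel _ _ = True            -- everything is related to everything

predRel :: Rel Integer         -- pred, restricted to non-negative Integers
predRel x y = y == x + 1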
There’s another way to define relations: inductively, using inference rules. An inference rule is written in the form:
premise1   premise2   ...   premiseN
------------------------------------ Rule
conclusion
The way to read such a rule is top down: if you have every premise, then the rule Rule
gives you the conclusion. We write down derivations using inference rules as tree-like structures, where each premise is itself the conclusion of some other rule, and so on. A rule with no premises is called an axiom, and always holds.
For example, we defined the less-than-or-equal relation, lte, as follows:
------- Id
x lte x
--------- Succ
x lte x+1
x lte y y lte z
------------------ Trans
x lte z
We constructed a derivation showing that 0 lte 3, as follows:
               ------- Succ   ------- Succ
               1 lte 2        2 lte 3
------- Succ   ---------------------------- Trans
0 lte 1        1 lte 3
------------------------------------------- Trans
0 lte 3
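We can also encode derivations themselves as data in Haskell. Here is one possible encoding of lte derivations, with a checker that computes what a derivation proves; the names (LteDeriv, conclusion, and so on) are my own, not something we wrote in class.

-- Each constructor corresponds to one inference rule.
data LteDeriv
  = IdRule Integer                -- proves x lte x
  | SuccRule Integer              -- proves x lte x+1
  | TransRule LteDeriv LteDeriv   -- proves x lte z from x lte y and y lte z

-- Returns Just (x,y) if the derivation is a valid proof of x lte y.
conclusion :: LteDeriv -> Maybe (Integer, Integer)
conclusion (IdRule x)        = Just (x, x)
conclusion (SuccRule x)      = Just (x, x + 1)
conclusion (TransRule d1 d2) = do
  (x, y)  <- conclusion d1
  (y', z) <- conclusion d2
  if y == y' then Just (x, z) else Nothing

-- The derivation of 0 lte 3 drawn above.
zeroLteThree :: LteDeriv
zeroLteThree = TransRule (SuccRule 0) (TransRule (SuccRule 1) (SuccRule 2))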
Is it the case that x lte y iff x ≤ y?
I should add that binary relations don't have to relate things in the same set. For example, we saw the relation isin ∈ 2^(Town × Country), such that Paris isin France but Paris isin Texas as well.
Functions, mathematically
A relation R is a function if: whenever x R y and x R z, then y = z.
For example, the pred relation is a function: if 1 pred y and 1 pred z, then we know that y = z = 2, and nothing else. The lte relation, on the other hand, is not a function: 1 lte 1 and 1 lte 2, but 1 ≠ 2.
We defined another relation in class, sumsto ∈ 2^(ℕ × ℕ × ℕ), such that sumsto = { (m,n,m+n) | m, n ∈ ℕ }. The sumsto relation is a function: if (m,n,o) and (m,n,p) are both in the sumsto relation, then o = p = m + n.
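For a finite relation written out as a list of pairs, the function property can be checked directly in Haskell; this is a small sketch of mine, not class code:

-- Is the (finite) relation r a function? Check that no input
-- is related to two different outputs.
isFunction :: (Eq a, Eq b) => [(a, b)] -> Bool
isFunction r = and [ y == z | (x, y) <- r, (x', z) <- r, x == x' ]

-- isFunction [(1,2), (2,3)] == True    (a finite fragment of pred)
-- isFunction [(1,1), (1,2)] == False   (a finite fragment of lte)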
Rewrite semantics
With our newfound foundation in set theory, we revisited the rewrite semantics for arithmetic. Here’s the syntax from lecture 3.
m, n are Integers
e is an Expression ::=
n
| e1 plus e2
| e1 times e2
| negate e
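Concretely, in Haskell this syntax might become the following data type (the constructor names are my guesses; the actual definition is in lecture 3's notes):

data Expr
  = Lit Integer        -- n
  | Plus Expr Expr     -- e1 plus e2
  | Times Expr Expr    -- e1 times e2
  | Negate Expr        -- negate e
  deriving (Eq, Show)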
And then we defined rewrite rules using → ∈ 2^(Expr × Expr). (I write → as -> below.)
---------------- Plus
n plus m -> n+m
---------------- Times
n times m -> n*m
--------------- Negate
negate n -> -n
e1 -> e1'
------------------------- PlusLeft
e1 plus e2 -> e1' plus e2
e2 -> e2'
------------------------- PlusRight
e1 plus e2 -> e1 plus e2'
e1 -> e1'
--------------------------- TimesLeft
e1 times e2 -> e1' times e2
e2 -> e2'
--------------------------- TimesRight
e1 times e2 -> e1 times e2'
e -> e'
--------------------- NegateInner
negate e -> negate e'
The inference rules above define → as a relation on expressions. We have (e1,e2) ∈ → if we can construct a derivation. For example:
------------- Plus
2 plus 7 -> 9
------------------------------- PlusLeft
(2 plus 7) times 5 -> 9 times 5
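As a Haskell sketch, using the hypothetical Expr type from above: the rules are nondeterministic (PlusLeft and PlusRight can both fire on the same expression), so this version arbitrarily commits to reducing the left subexpression first. The name step is mine.

-- One step of ->, always choosing the leftmost redex.
-- Returns Nothing when no rule applies (i.e., on a bare Lit).
step :: Expr -> Maybe Expr
step (Plus (Lit n) (Lit m))  = Just (Lit (n + m))      -- Plus
step (Times (Lit n) (Lit m)) = Just (Lit (n * m))      -- Times
step (Negate (Lit n))        = Just (Lit (negate n))   -- Negate
step (Plus e1 e2) =
  case step e1 of
    Just e1' -> Just (Plus e1' e2)                     -- PlusLeft
    Nothing  -> fmap (Plus e1) (step e2)               -- PlusRight
step (Times e1 e2) =
  case step e1 of
    Just e1' -> Just (Times e1' e2)                    -- TimesLeft
    Nothing  -> fmap (Times e1) (step e2)              -- TimesRight
step (Negate e) = fmap Negate (step e)                 -- NegateInner
step (Lit _)    = Nothing                              -- no rule applies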
Reflexive, transitive closure
Note that → tracks a single step of reduction. What if we want to talk about many steps of reduction?
We defined the reflexive, transitive closure operator, *. We can think of * as a function from relations to relations.
Suppose we have a relation R ∈ 2^(X × X), for some set X. We define R* as follows:
If (x,y) is in R, then (x,y) is in R*.
For all x in X, (x,x) is in R*. (That is, R* is reflexive.)
If (x,y) and (y,z) are in R*, then (x,z) is in R*. (That is, R* is transitive.)
We can rephrase these as inference rules:
x R y
------ INCLUDE
x R* y
------ REFL
x R* x
x R* y y R* z
---------------- TRANS
x R* z
We talked in class about how the lte relation is equal to pred*.
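For a finite relation over a finite carrier set, we can even compute R* by brute force. This is a slow fixpoint sketch of my own (not class code), but it lets us spot-check that pred* agrees with lte on a small fragment of ℕ:

-- Reflexive, transitive closure of the finite relation r over the carrier xs.
closure :: Eq a => [a] -> [(a, a)] -> [(a, a)]
closure xs r = go (foldr insertPair r [ (x, x) | x <- xs ])          -- INCLUDE and REFL
  where
    insertPair p s | p `elem` s = s
                   | otherwise  = p : s
    go s | s' == s   = s                                             -- fixpoint reached
         | otherwise = go s'
      where s' = foldr insertPair s
                   [ (x, z) | (x, y) <- s, (y', z) <- s, y == y' ]   -- TRANS

-- (0,3) `elem` closure [0..3] [ (n, n+1) | n <- [0..2] ]  == True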
We then saw that ->* relates more expressions than -> does. For example, since

(2 plus 7) times 5 -> 9 times 5 -> 45

we have

(2 plus 7) times 5 ->* 45.
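Continuing the Haskell sketch: iterating step until no rule applies computes an expression that is ->*-related to the input (the name normalize is mine).

normalize :: Expr -> Expr
normalize e = case step e of
                Just e' -> normalize e'      -- take another -> step
                Nothing -> e                 -- no rule applies; stop

-- normalize (Times (Plus (Lit 2) (Lit 7)) (Lit 5)) == Lit 45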
Resolving ambiguity
- go over the arithmetic rewrite rules: write inference rules, explain them as a relation defined by iff
- do an easy derivation, then an ambiguous derivation, e.g., (1 + 2) * (3 + 4) (spelled out below)
- write up unambiguous rules
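To spell out the ambiguity (this is my reading of the outline above, using the rules from the previous section): the expression (1 plus 2) times (3 plus 4) has two different first steps, so there are two different derivations.

(1 plus 2) times (3 plus 4) -> 3 times (3 plus 4)          TimesLeft, using Plus
(1 plus 2) times (3 plus 4) -> (1 plus 2) times 7          TimesRight, using Plus

Both roads lead to 21 in the end, but the derivations differ; an unambiguous set of rules would force one order, say left to right.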
Denotational semantics
The key property is compositionality: the meaning of an expression is defined in terms of the meanings of its subexpressions.
[[-]] : Expression -> Integer
[[n]] = n
[[e1 plus e2]] = [[e1]] + [[e2]]
[[e1 times e2]] = [[e1]] * [[e2]]
[[negate e]] = - [[e]]
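Written against the hypothetical Expr type from earlier, the denotation is a direct transcription of these equations (denote is my name for it):

denote :: Expr -> Integer
denote (Lit n)       = n
denote (Plus e1 e2)  = denote e1 + denote e2
denote (Times e1 e2) = denote e1 * denote e2
denote (Negate e)    = negate (denote e)

-- If the rewrite rules and the denotation agree, we'd expect
-- denote e == denote (normalize e) for every expression e.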
Semantics for the Booleans
- do a little Boolean language: and, or, not
- encode implies (a -> b iff not a or b)
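Here is a sketch of what that little Boolean language might look like in Haskell, with implies encoded as suggested (all the names here are mine, not from class):

data BExpr
  = BTrue
  | BFalse
  | And BExpr BExpr
  | Or  BExpr BExpr
  | Not BExpr
  deriving (Eq, Show)

-- Encode implication: a implies b iff not a or b.
implies :: BExpr -> BExpr -> BExpr
implies a b = Or (Not a) b

-- A denotational semantics into Haskell's Bool.
denoteB :: BExpr -> Bool
denoteB BTrue     = True
denoteB BFalse    = False
denoteB (And a b) = denoteB a && denoteB b
denoteB (Or  a b) = denoteB a || denoteB b
denoteB (Not a)   = not (denoteB a)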