# O: F \to Answer
# O = lambda <[]_Command, V.S'> . (Value \to Answer V)
# \Rightarrow : C x C (a relation, not a function, to allow nondeterminism)
# . = cons (sort of.. we can also do it at the end of lists)
# & = append

The rules for PostFix are:

# <N.Q, S> \Rightarrow <Q, N.S>
# <(Qexec).Qrest, S> \Rightarrow <Qrest, (Qexec).S>
# <pop.Q, V.S> \Rightarrow <Q, S>
# <swap.Q, V1.V2.S> \Rightarrow <Q, V2.V1.S>
# <sel.Q, V1.V2.0.S> \Rightarrow <Q, V1.S>
# <sel.Q, V1.V2.N.S> \Rightarrow <Q, V2.S> (N != 0)
# <A.Q, N1.N2.S> \Rightarrow <Q, N.S> where N = (calculate A N2 N1)
#   {arithmetic operations}
# <exec.Qrest, (Qexec).S> \Rightarrow <Qexec & Qrest, S>

or use a recursive inference rule for exec:

# <Qexec, S> \Rightarrow* <[], S'>
# ---------------------------------------------
# <exec.Qrest, (Qexec).S> \Rightarrow <Qrest, S'>

Not all inference rules are good. For example, a rule whose antecedent is the
same transition as its consequent proves nothing:

# <Q, S> \Rightarrow <Q', S'>
# -----------------------
# <Q, S> \Rightarrow <Q', S'>

In general, the antecedent should be "simpler" than the consequent. Can't use
=/\Rightarrow or \Rightarrow* in antecedents..

PostFix always terminates. But if we add "dup," it may not: "(dup exec) dup
exec" will not terminate. But adding dup makes it Turing universal. (In
particular, <[dup, exec], [(dup exec)]> \Rightarrow* <[dup, exec], [(dup exec)]>.)

[09/14/99 01:04 PM] Lecture

> Proofs about operational semantics

>> Proving that a language terminates

Define an energy function on configurations. Then prove that energy is
strictly decreasing with each transition, and that the initial energy must be
finite.

>>> Energy Function

# E_config\lsemantics <Q, S> \rsemantics = E_seq\lsemantics Q \rsemantics + E_stack\lsemantics S \rsemantics
#
# E_seq\lsemantics [] \rsemantics = 0
# E_seq\lsemantics C.Q \rsemantics = 1 + E_com\lsemantics C \rsemantics + E_seq\lsemantics Q \rsemantics
#
# E_com\lsemantics C \rsemantics = E_seq\lsemantics C \rsemantics (for any sequence C)
# E_com\lsemantics C \rsemantics = 1 (for C not a sequence)
#
# E_stack\lsemantics N.S \rsemantics = 1 + E_stack\lsemantics S \rsemantics
# E_stack\lsemantics Q.S \rsemantics = E_seq\lsemantics Q \rsemantics + E_stack\lsemantics S \rsemantics
# E_stack\lsemantics [] \rsemantics = 0

>>> Proof for command N

# <N.Q, S> \Rightarrow <Q, N.S>
# E(<N.Q, S>) = 1+E(N)+E(Q)+E(S) = 2+E(Q)+E(S)
#             = 1+E(Q)+E(N.S) = 1 + E(<Q, N.S>)

>>> Proof for exec

# <exec.Qrest, (Qexec).S> \Rightarrow <Qexec & Qrest, S>
# E(<exec.Qrest, (Qexec).S>) = 1+E(exec)+E(Qrest)+E(Qexec)+E(S)
#   = 2+E(Qrest)+E(Qexec)+E(S) = 2+E(Qexec & Qrest)+E(S)
#   = 2 + E(<Qexec & Qrest, S>)

>>> Try proving for dup:

# <dup.Q, V.S> \Rightarrow <Q, V.V.S>
# E(<dup.Q, V.S>) = 2+E(Q)+E(V)+E(S)
# but:
# E(<Q, V.V.S>) = E(Q)+E(V)+E(V)+E(S)

But note that the energy still decreases as long as V is a number, so we can
safely duplicate numbers. The same goes for empty command sequences.

Note also that we can duplicate any command sequence that doesn't contain
"dup." To do so, simply make the energy of dup very high. Then getting rid of
the dup will release enough energy that duplicating the sequence won't add
more energy than the dup released. (The command-sequence lengths that we can
dup depend on the energy we assign to dup, but since we could give dup
arbitrarily high energy, we can do this with arbitrarily long command
sequences.) To do it formally, we could define a tuple energy <a, b> and
define <a, b> > <c, d> if a > c, or if a = c and b > d (lexicographic order).

> Domains

>> Primitive Domains

The elements of the domain are explicitly specified. E.g., {error},
{red, green, blue}, {..., -2, -1, 0, 1, 2, ...}. Subscripts can be used on
values or variables to label the domain an element is in.

>> Compound Domains (Product Domain)

The cross product of two or more domains, i.e., the set of all tuples whose
first element is from the 1st domain, whose 2nd element is from the 2nd
domain, and so on. E.g., A = B x C x D, Configuration = Commands x Stack.
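The energy argument above can be sketched concretely. This is a hypothetical encoding (not from the notes): command sequences are Python lists, numbers are ints, and a configuration is a `(commands, stack)` pair.

```python
# Sketch of the energy function: E(<Q, S>) = E_seq(Q) + E_stack(S).

def e_seq(q):
    # E_seq[[C.Q]] = 1 + E_com[[C]] + E_seq[[Q]]; E_seq[[[]]] = 0
    return sum(1 + e_com(c) for c in q)

def e_com(c):
    # An executable sequence carries its sequence energy; other commands cost 1.
    return e_seq(c) if isinstance(c, list) else 1

def e_stack(s):
    # Pushed sequences keep their sequence energy; numbers cost 1.
    return sum(e_seq(v) if isinstance(v, list) else 1 for v in s)

def energy(config):
    q, s = config
    return e_seq(q) + e_stack(s)

# One step of <N.Q, S> => <Q, N.S> drops the energy by exactly 1:
before = ([5, 'add'], [3])
after = (['add'], [5, 3])
assert energy(before) == energy(after) + 1
```

This also makes the dup failure visible: duplicating a non-empty sequence on the stack doubles its stack energy, so the step need not decrease total energy.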
Can use the tuple_compounddomain operator to construct elements. Or shorthand
with angle brackets, e.g., <red, true> \in (Colors x Bools) or <b, c, d> \in A.

>> Sum Domain

The sum of multiple domains, i.e., the set of all elements of each of the
summand domains. E.g., if A = Colors + Bools, then true \in A and
orange \in A. To construct elements of a sum domain from a summand domain,
use the Inj1_sumdomain, Inj2_sumdomain, etc. functions. Note that sum domains
are "tagged," so we can determine which of the constituents an element came
from. So, for example, if B = Bool + Bool, then we can tell Inj1(true) apart
from Inj2(true).

If the summand domains are all distinct, then we can use abbreviations for
the injection functions. For example, if Value = Intlit + Error, then
(Intlit \to Value) is shorthand for Inj1_value. If we write "error," it is in
the Error domain. If we want an error value, we write "(Error \to Value error)".

Extract from sum domains with matching:

# matching_bool,D Eb
# > (True \to Bool Ignore) | Et
# > (False \to Bool Ignore) | Ef
# endmatching

This is essentially equivalent to "(if Eb Et Ef)". Note that "Ignore" is a
bound variable. Also, Eb must be in the Bool domain, and Et and Ef must be in
the D domain.

>> Sequence Domains

A sequence domain is basically an arbitrary number of products of the same
domain. For example, Bool*, A*. []_Bool, [true]_Bool, [true, false, true]_Bool.
Operate on sequences with nullp, cons, and car.

>> Arrow Domain

A function from one domain to another domain. The constructor is lambda. For
example, lambda x . x is in the domain Bool \to Bool (and in other domains
too, but each one is a different function). Each function takes exactly one
argument. Another example: + : Intlit \to Intlit \to Intlit. Note that arrows
right-associate, so this is equivalent to Intlit \to (Intlit \to Intlit).
"+ 1 2" is equivalent to "((+ 1) 2)". (Currying, after Haskell Curry.)
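Currying and tagged sums can both be sketched in a few lines of Python (an illustration, not part of the notes):

```python
# Currying: a two-argument "+" as nested one-argument functions,
# as in Intlit -> (Intlit -> Intlit).
def plus(x):
    return lambda y: x + y

assert plus(1)(2) == 3   # "+ 1 2" parses as "((+ 1) 2)"

# Tagged sums: a value of B = Bool + Bool remembers its summand,
# so Inj1(true) and Inj2(true) stay distinct.
inj1 = lambda v: ('inj1', v)
inj2 = lambda v: ('inj2', v)
assert inj1(True) != inj2(True)
```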
[09/15/99 03:23 PM] Recitation

> Expression Language (EL)

>> Basic Components of an SOS

# < C, \Rightarrow, F, I, O >
# C \in Configuration = Num-Exp
# F \in Final = {(Numeral \to Num-Exp M)}
# O = lambda (Numeral \to Num-Exp M) . M
# I = (substitute M for input in N)

>> Defining \Rightarrow

First define \Rightarrow_B, since \Rightarrow is just defined on N's.

# (Rop M1 M2) \Rightarrow_B a (where a = (relate Rop M1 M2))
# (Lop M1 M2) \Rightarrow_B a (where a = (do_log Lop M1 M2))

>>> Axioms

# (Aop M1 M2) \Rightarrow M (where M = (calculate Aop M1 M2))
# (if true N1 N2) \Rightarrow N1
# (if false N1 N2) \Rightarrow N2

>>> Inference rules

# N1 \Rightarrow N1'                       N2 \Rightarrow N2'
# ------------------------------------   ------------------------------------
# (Aop N1 N2) \Rightarrow (Aop N1' N2)   (Aop N1 N2) \Rightarrow (Aop N1 N2')
#
# B \Rightarrow_B B'
# ------------------------------------
# (if B N1 N2) \Rightarrow (if B' N1 N2)
#
# N1 \Rightarrow N1'                       N2 \Rightarrow N2'
# ------------------------------------   ------------------------------------
# (Rop N1 N2) \Rightarrow_B (Rop N1' N2)   (Rop N1 N2) \Rightarrow_B (Rop N1 N2')
#
# B1 \Rightarrow_B B1'                     B2 \Rightarrow_B B2'
# ------------------------------------   ------------------------------------
# (Lop B1 B2) \Rightarrow_B (Lop B1' B2)   (Lop B1 B2) \Rightarrow_B (Lop B1 B2')

[09/21/99 01:02 PM] Lecture

> Denotational Semantics

# Syntactic Domains
#     |
#     | Meaning function
#     V
# Semantic Domains

Don't operationally give a meaning for a program fragment. Instead, associate
a "semantic value" with each program fragment. Then we can assert that two
program fragments have the same semantic value.

For each syntactic domain D, we define a semantic domain. For PostFix, we
start with two semantic domains: answers, and (Stack \to Stack) transforms.
Functions to map from syntactic domains to semantic domains:

# P_sem: Program \to Answer
# Q_sem: Commands \to (Stack \to Stack)
# C_sem: Command \to (Stack \to Stack)
# N_sem: Intlit \to Int
# A_sem: Arithop \to (Int \to Int \to Answer)
# R_sem: Relop \to (Int \to Int \to Bool)

We want to be able to operate in the semantic domain. If + joins two phrases
syntactically, * joins two phrases semantically, and f maps syntax \to
semantics, then we want: f(x1 + x2) = f(x1) * f(x2) (a homomorphism).

# t \in Stack-Transform = Stack \to Stack
# s \in Stack = Value* | Error
# v \in Value = Int | Stack-Transform
# a \in Answer = Value | Error
# Error = {error}
# i \in Int = {set of integers}
# b \in Bool = {true, false}

\lsemantics \rsemantics = "element of syntactic domain" (parse the given phrase).

# P_sem\lsemantics (Q) \rsemantics = (top (Q_sem\lsemantics Q \rsemantics empty-stack))
#
# Q_sem\lsemantics [] \rsemantics = (lambda (stack) stack)
# Q_sem\lsemantics C.Q \rsemantics = (o Q_sem\lsemantics Q \rsemantics C_sem\lsemantics C \rsemantics)
#
# C_sem\lsemantics N \rsemantics = (push (Value \to Answer (Int \to Value N)))
# C_sem\lsemantics pop \rsemantics = pop
# C_sem\lsemantics swap \rsemantics = (lambda (stack)
#     ((push (top (pop stack)))
#      ((push (top stack)) (pop (pop stack)))))
# C_sem\lsemantics (Q) \rsemantics =
#     (push (Value \to Answer (Stack-Transform \to Value Q_sem\lsemantics Q \rsemantics)))
# C_sem\lsemantics A \rsemantics = (arithop A_sem\lsemantics A \rsemantics)

# ((dup exec) dup exec) maps to bottom.

Observational equivalence:

# X = (1 plus)

Under what circumstances will two sequences always give the same result when
plugged in for X? Any two program fragments Q1, Q2 are observationally
equivalent iff Q_sem\lsemantics Q1 \rsemantics = Q_sem\lsemantics Q2 \rsemantics.
For example, [1 2 add] is observationally equivalent to [3], and [(1 2 add)]
is observationally equivalent to [(3)].

Semantic domains + helper functions form a "semantic algebra."
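A minimal sketch of Q_sem in Python (an illustration, with only the commands needed for the [1 2 add] ~ [3] example): each command denotes a stack transform (Stack -> Stack), and a sequence denotes the composition of its commands' transforms.

```python
def c_sem(cmd):
    # C_sem: each command maps to a stack transform.
    if isinstance(cmd, int):
        return lambda s: [cmd] + s                 # push N
    if cmd == 'add':
        return lambda s: [s[1] + s[0]] + s[2:]     # combine top two
    if cmd == 'pop':
        return lambda s: s[1:]
    raise ValueError(cmd)

def q_sem(cmds):
    # Q_sem[[C.Q]] = Q_sem[[Q]] o C_sem[[C]]; Q_sem[[[]]] = identity
    def transform(stack):
        for c in cmds:
            stack = c_sem(c)(stack)
        return stack
    return transform

# [1 2 add] and [3] denote the same transform on the empty stack:
assert q_sem([1, 2, 'add'])([]) == q_sem([3])([])   # both give [3]
```

Error stacks are swept under the rug here; the notes' with-stack-values helper is what restores them.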
[09/22/99 02:56 PM] Recitation

> PS1a Comments

Common problem: on problem 1, many didn't mention what happened to the
domains. First, you have to define a new value domain, which contains a pair
domain, etc.

Also, syntax errors:

# <> = tuples
# [] = sequences
# () = application

Also injection functions.

> Denotational Semantics

>> Eta Reduction/Expansion

We can eta-reduce things out (e.g., eta-reduce the stack out and just deal
with stack transforms):

# (lambda (s) (f s)) = f

>> "Elegant" Stack-Handling

Capture common patterns of stack manipulation. We want to be able to assume
for most of our denotational semantics that things will go OK: sweep error
handling under the rug -- but make sure it's still there.

! With-stack-values handles the error cases of stack transforms for
! us. Basically, we give it a function that maps values to stacks,
! and it will check for error stacks for us. (cf. with-stack-value)

# with-stack-values: (Value* \to Stack) \to Stack-Transform

[09/23/99 01:09 PM] Lecture

> Fixed Points

Examples:

# FactGen = (lambda (f) (lambda (n) (if (= n 0) 1 (* n (f (- n 1))))))
# Even0Gen = (lambda (f) (lambda (n) (if (= n 0) 0 (f (- n 2)))))

Then FactGen(fact) = fact and Even0Gen(even0) = even0. Note that this
definition is self-referential. Model theory creates a mapping between
concrete models and mathematical consistency...

Even0 is defined for 0, 2, 4, 6, ... But for odd numbers? It can be anything:
the functional definition doesn't specify. FactGen has one fixed point
(excluding negatives): fact. Even0Gen has a countable number of fixed points.

We need to be able to select one fixed point. There is a function, fix, which
will give you the fixed point of a function. E.g.:

# fix(FactGen) = fact
# fix(Even0Gen) = {<0,0>, <2,0>, <4,0>, ...}

But we want fix to give us "bottom" where even0 is undefined. Whether or not
a function has a fixed point (and the number of fixed points) depends on the
domain of the functions.
# function
# f(x) = 1-x    1 fixed point: 1/2
# f(x) = 4/x    2 fixed points: 2, -2
# f(x) = x      infinitely many fixed points

Consider continuous functions on the unit interval. Then f has a fixed point:

#      |   /
# f(x) |  /
#      | /
#      |/____
#         x

A function has a fixed point wherever it crosses the line f(x) = x. Because a
continuous function on the unit interval must cross this line, it must have a
fixed point. But a discontinuous function doesn't have to have a fixed point.

>> Partial Order

A partial order is reflexive, antisymmetric, and transitive:

# a weaker-than a
# if a weaker-than b and b weaker-than a, then a = b
# if a weaker-than b and b weaker-than c, then a weaker-than c

Use the symbol \sqsubseteq for weaker-than.

>> Least Upper Bounds

Define a least upper bound on the amount of information in a set. Let S be a
subset of D. If D contains an element a s.t. for all x in S, x \sqsubseteq a,
then a is an upper bound of S; the least such a is the LUB.

Let S \subseteq D.
LUB(S) = x \in D s.t.
1. \forall y \in S, y \sqsubseteq x
2. \forall z \in D, (\forall y \in S, y \sqsubseteq z) \to (x \sqsubseteq z)

>> Information Content

How can we define the information content of a function? For two functions
f and g: X \to Y:
- f \sqsubseteq g if \forall x \in X, f(x) \sqsubseteq g(x)

Information content:

# 1   2
#  \ /
#  bot       (bot is written \bot)

Functions:

# g1  g2      g1(1)=1, g1(2)=1; g2(1)=2, g2(2)=2
#  \ /
#   f1        f1(1)=\bot, f1(2)=\bot

etc. Usually, F(\bot) = \bot.

# f0 = {}                  (all elided values map to bottom)
# f1 = FactGen f0 = {<0,1>}
# f2 = FactGen f1 = {<0,1>, <1,1>}

Thus, f_n \sqsubseteq f_{n+1}. The fixed point is lim(n \to inf) f_n.

FactGen: (Nat \to Nat_bot) \to (Nat \to Nat_bot)

>>> Definitions:

- Chain = a totally ordered subset of a partial order.
- Complete partial order (CPO) = every chain C in D has a least upper bound
  in D (so LUB(C) exists).
- Pointed complete partial order = a CPO with a least element.
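The successive-approximation picture above can be run directly. In this sketch (an illustration, not from the notes) "bottom" is modeled as a function that raises, and applying FactGen n times to bottom yields f_n, defined on 0..n-1 and bottom elsewhere.

```python
def fact_gen(f):
    # FactGen = (lambda (f) (lambda (n) (if (= n 0) 1 (* n (f (- n 1))))))
    return lambda n: 1 if n == 0 else n * f(n - 1)

def bottom(n):
    raise RecursionError('bottom')

f = bottom
for _ in range(6):
    f = fact_gen(f)          # f is now FactGen^6(bottom)

assert f(5) == 120           # defined on 0..5
try:
    f(6)                     # still bottom at 6
    assert False
except RecursionError:
    pass
```

Each iteration adds one more input, so f_n is weaker-than f_{n+1}, and the limit is fix(FactGen) = fact.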
- Monotonic: for d1, d2 \in D, if d1 \sqsubseteq d2 then f(d1) \sqsubseteq f(d2).
  I.e., if you give a function less information, you can't get more
  information out of it.
- Continuity: f: D \to E, where D and E are CPOs. Then for all chains
  S \subseteq D, f(LUB_D S) = LUB_E {f(x) | x \in S}.

Continuity implies monotonicity. If there are many fixed points, pick the one
with the least information content.

Continuity implies monotonicity:

# Let C = {d1, d2}, f: D \to E, d1 \sqsubseteq d2
# (f (LUB C)) = LUB {f(c) | c \in C}
# (f d2) = LUB {f(d1), f(d2)}, so f(d1) \sqsubseteq f(d2)

Our rules should guarantee monotonicity and continuity. Consider
halt = (lambda (x) (= x \bot)): it's neither monotonic nor continuous.
Consider always_false, which returns false for anything (even \bot). This is
at least monotonic. Continuous?

Monotonicity and continuity guarantee that the fixed point operator will
converge...

Fixed Point Theorem: If D is a pointed CPO, then a continuous function
f: D \to D has a fixed point that is least:

# fix(f) = LUB {f^n(\bot) | n >= 0}
# \bot \sqsubseteq f(\bot)
# f^n(\bot) \sqsubseteq f^(n+1)(\bot)

1. Show that fix f is a fixed point of f:

# f(fix f)
# = f(LUB {f^n(\bot) | n>=0})
# = LUB {f^n(\bot) | n>=1}
# = LUB {f^n(\bot) | n>=0}
# = fix(f)

2. Prove that it is the least fixed point in D. Let d be any fixed point:

# d = f(d)
# \bot \sqsubseteq d
# f(\bot) \sqsubseteq f(d) = d
# f^n(\bot) \sqsubseteq d
# LUB {f^n(\bot) | n>=0} \sqsubseteq d
# fix(f) \sqsubseteq d

Assume A, B are CPOs. We should show that A+B, AxB, A*, and A \to B are CPOs.
An especially difficult part of the proof is that D = D \to D is a CPO...

Define fix:

# fix = (lambda (f) ((lambda (x) (f (x x))) (lambda (x) (f (x x)))))

[09/28/99 01:02 PM] Lecture

> Functional Language (FL)

- Every statement or expression has a value
- Two pieces: FLK (kernel), and FL.
- We can desugar FL \to FLK.
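The fix above is the call-by-name Y combinator. In a strict language like Python the self-application (x x) must be eta-expanded, giving the call-by-value variant (the Z combinator); a sketch, not part of the notes:

```python
def fix(f):
    # Z = lambda f. (lambda x. f(lambda v. x(x)(v))) (lambda x. f(lambda v. x(x)(v)))
    return (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# fix(FactGen) = fact, with no recursive definition anywhere:
fact = fix(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
assert fact(5) == 120
```

Without the eta-expansion, evaluating (x x) eagerly would diverge before f is ever applied.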
>> Examples

- Dynamically typed: (if 0 #t #f) = error
- Normal order (lazy): (f E_bot) can still terminate
- Implicitly curried: (f 1 2) = ((f 1) 2)

# (proc x (primop * x x)) = square
# (pair (primop not? #f) (primop / 1 0)) \to pair with #t in the left
#   branch.. if we eval the right branch, we'll get an error
# (call (proc x (call x x)) (proc x (call x x))) \to \bot

factorial:

# (rec fact (proc n (if (primop = n 0) 1
#                       (primop * n (call fact (primop - n 1))))))
#
# (rec l (pair 1 (pair 2 l))) \to (1 2 1 2 1 2 ...)
#
# (rec x x) \to bottom

>> Syntax of FLK

E ::= ; expressions
- (primop P E*) ; primitives (26 P's)
- (proc I E)
- (call E1 E2)
- I ; identifiers
- (if E1 E2 E3)
- L ; literals
- (pair E1 E2) ; non-strict (lazy)
- (rec I E) ; binds I to E within E

L ::= ; literals
- #u
- #f
- #t
- N
- (symbol I)

(rec I E) is analogous to (fix (proc I E)).

>> Syntax of FL

E ::=
- FLK expressions
- (lambda (I*) E)
- (E1 E*)
- (list E*)
- (quote S*)
- (cond (Epredicate Econsequent)* (else Edefault)?)
- (and E*)
- (or E*)
- (let ((I E)*) E)
- (letrec ((I E)*) E)

S ::=
- L
- (S*)

In FL, there are also lots of pre-bound functions.

>> Desugaring from FL \Rightarrow FLK

# D_exp \lsemantics (lambda () E) \rsemantics = (proc I_fresh D_exp\lsemantics E \rsemantics)
# D_exp \lsemantics (E) \rsemantics = (call D_exp\lsemantics E \rsemantics #u)
# D_exp \lsemantics (lambda (I1 I+) E) \rsemantics = (proc I1 D_exp\lsemantics (lambda (I+) E) \rsemantics)
# D_exp \lsemantics (E1 E2 E3+) \rsemantics = D_exp\lsemantics ((call E1 E2) E3+) \rsemantics
# D_exp \lsemantics (letrec ((I1 E1) ... (In En)) Ebody) \rsemantics =
#   (let ((Iouter (rec Iinner
#                   (let ((I1 (nth Iinner 1)) ... (In (nth Iinner n)))
#                     (list E1 ... En)))))
#     (let ((I1 (nth Iouter 1)) ... (In (nth Iouter n)))
#       Ebody))

>> Variables and Identifiers

Variable - a value bound by proc or rec.
Identifier - a symbol that stands for a variable.
Free identifier - an identifier that is not bound by an enclosing proc or rec.
Bound identifier - an identifier that is bound by an enclosing proc or rec.

We can define corresponding functions Free-Ids\lsemantics E \rsemantics and
Bound-Ids\lsemantics E \rsemantics.

>>> Variable Capture

Occurs when we alpha-rename and a variable's binding changes:

# (proc a (proc b (call a c)))
# Renaming a to c causes a problem (external capture)
# Renaming a to b causes a problem (internal capture)

>> Substitution

Operator:

# [E1/I] E2

This operator says to replace all free I's in E2 with E1 (read "substitute
E1 for free I in E2").

# [E1/I] (if E2 E3 E4) \Rightarrow (if [E1/I]E2 [E1/I]E3 [E1/I]E4)
# [E1/I] (proc I E2) \Rightarrow (proc I E2)
# [E1/I'] (proc I E2) \Rightarrow (proc Ifresh [E1/I']([Ifresh/I]E2))

>> SOS for FLK

# < Exp, \Rightarrow, Val-Exp, I, O >
#
# I = identity
# O = (lambda (V) (alpha-class V))
# Val-Exp = L | (pair E1 E2) | (proc I E) | I
#
# (call (proc I1 E1) E2) \Rightarrow [E2/I1] E1
#
# E1 \Rightarrow E1'
# -------------------
# (if E1 E2 E3) \Rightarrow (if E1' E2 E3)
#
# (if #t E2 E3) \Rightarrow E2
# (if #f E2 E3) \Rightarrow E3
#
# (rec I E) \Rightarrow [(rec I E)/I] E
# (e.g., (rec l (pair 1 (pair 2 l))) \Rightarrow (pair 1 (pair 2 (rec l ...))))

[09/29/99 03:02 PM] Recitation

> Fixed Points

>> Least Fixed Point Theorem

Says that fix(f) exists for f and D which obey certain conditions. We need to
make sure that the least upper bound exists. E.g., if we were trying to find
f(x) by successive approximations, each with less information than f(x), we
need to make sure that the limit exists in our domain.

Pointedness gives us a "starting point" (bottom). We can make any CPO pointed
by just adding "bottom" to it.

f: D \to E continuous (i.e., preserves least upper bounds):
- Take a chain C \subseteq D
- f(LUB_D C) = LUB_E f(C)

Then fix_D(f) = LUB_D {f^n(\bot) | n >= 0}.

>> Operations on CPOs

+, x, bot, \to, *. Assume D, E are CPOs:
* D x E is a CPO with the "product ordering"
* To make D + E a PCPO, we need to add a new bottom.
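The capture-avoiding substitution rules can be sketched on a toy FLK subset (a hypothetical encoding: identifiers are strings, `('call', e1, e2)` and `('proc', i, body)` tuples for the other forms). The proc case always renames its bound variable to a fresh name, which implements the [E1/I'](proc I E2) rule above.

```python
import itertools
_fresh = itertools.count()

def subst(e1, i, e2):
    # [e1/i] e2
    if isinstance(e2, str):
        return e1 if e2 == i else e2
    tag = e2[0]
    if tag == 'call':
        return ('call', subst(e1, i, e2[1]), subst(e1, i, e2[2]))
    if tag == 'proc':
        j, body = e2[1], e2[2]
        if j == i:                        # i is rebound: no free occurrences
            return e2
        fresh = f'{j}_{next(_fresh)}'     # rename to dodge capture
        return ('proc', fresh, subst(e1, i, subst(fresh, j, body)))
    raise ValueError(tag)

# [c/a](proc b (call a b)): a is replaced; b is (safely) renamed, so the
# substituted c could never be captured even if it were named b.
result = subst('c', 'a', ('proc', 'b', ('call', 'a', 'b')))
assert result[0] == 'proc' and result[2][1] == 'c'
```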
>> Desugaring letrec (again)

# Church list:
# (lambda (I_receiver)
#   (I_receiver I1 ... In))

[10/05/99 01:10 PM] Lecture

> Dynamic Environments

# (let ((a 1))
#   (let ((f (lambda (x) (+ a x))))
#     (let ((a 20))
#       (f 300))))

For static scoping, the answer = 301. For dynamic scoping, the answer = 320.

The text desugars this to:

# (call (proc a
#         (call (proc f
#                 (call (proc a (call f 300))
#                       20))                      ; let a=20
#               (proc x (primop + a x))))         ; let f=(lambda ..)
#       1)

In dynamic scoping, free identifiers are defined by the caller, not the
callee. Problematic, since the callee can't preserve invariants. Useful,
since you can redefine variables that the callee will use, e.g., redefine
sqrt, or redefine the input stream. But it's hard to tell what will or won't
break the callee. Advantage over using a single global variable: it will
automatically undo your changes once you leave your environment. Not used
very often.

For static scoping:

# p \in Procedure = Denotable \to Computation
# E\lsemantics (proc I E) \rsemantics = (lambda (e)
#   (val-to-comp (Procedure \to Value
#     (lambda (d) (E\lsemantics E \rsemantics [I:d]e)))))
# E\lsemantics (call E1 E2) \rsemantics = (lambda (e)
#   (with-procedure (E\lsemantics E1 \rsemantics e)
#     (lambda (p) (p (E\lsemantics E2 \rsemantics e)))))

For dynamic scoping:

# p \in Procedure = Denotable \to Environment \to Computation
# E\lsemantics (proc I E) \rsemantics = (lambda (e_def)   ; e_def gets ignored
#   (val-to-comp
#     (Procedure \to Value
#       (lambda (d e_call) (E\lsemantics E \rsemantics [I:d]e_call)))))
# E\lsemantics (call E1 E2) \rsemantics = (lambda (e)
#   (with-procedure (E\lsemantics E1 \rsemantics e)
#     (lambda (p) (p (E\lsemantics E2 \rsemantics e) e))))

Variable lookups in dynamically scoped languages are very expensive: since
you don't know anything about the environment you're getting, you have to
search the environment to find each variable.
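The 301-vs-320 example can be checked in Python, whose closures are statically scoped; the dynamic answer is simulated by threading an explicit environment dict through every call (a sketch, not part of the notes).

```python
def static_version():
    # Mirrors the nested lets: f closes over the a = 1 in scope at definition.
    def let_a(a):
        def let_f(f):
            def inner_a(a):      # new binding of a; f's captured a is unchanged
                return f(300)
            return inner_a(20)
        return let_f(lambda x: a + x)
    return let_a(1)

assert static_version() == 301

def dynamic_version():
    # Free identifiers are looked up in the caller's environment.
    def let_a(a, env):
        env = dict(env, a=a)
        def let_f(f, env):
            env = dict(env, f=f)
            def inner_a(a, env):
                env = dict(env, a=a)
                return env['f'](300, env)    # f sees a = 20 at call time
            return inner_a(20, env)
        return let_f(lambda x, e: e['a'] + x, env)
    return let_a(1, {})

assert dynamic_version() == 320
```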
:(

Static:

# E\lsemantics (call E_proca1 1) \rsemantics  e_0
#   \uparrow
# E\lsemantics (call E_procf E_procx) \rsemantics  [a_i] \to 1  \longleftarrow
#   \uparrow                                                     \
# E\lsemantics (call E_proca2 20) \rsemantics  [f_i] \to (lambda (d) (+ x a)), closed over e[a=1]
#   \uparrow
# E\lsemantics (call f 300) \rsemantics  [a_i] \to 20

Dynamic:

# E\lsemantics (call E_proca1 1) \rsemantics  e_0
#   \uparrow
# E\lsemantics (call E_procf E_procx) \rsemantics  [a_i] \to 1
#   \uparrow
# E\lsemantics (call E_proca2 20) \rsemantics  [f_i] \to (lambda (d e) (+ x a)), uses the call-time e
#   \uparrow                                      \uparrow
# E\lsemantics (call f 300) \rsemantics  [a_i] \to 20  ___\nearrow

> Packaged Environments

4 primitives: record, select, override, conceal. Everything else is sugar.

>> records

Create a data structure with name/value pairs:

# (define joe (record (age 23) (male #t)))
# (select age joe)

>>> records as environments

We can use joe as an environment (or a partial environment):

# (with (age male) joe
#   (+ 10 age))

Desugars to:

# (with (I1 I2 ... In) E1 E2)
# \Rightarrow
# (let ((I E1))
#   (let ((I1 (select I1 I))
#         (I2 (select I2 I))
#         ...
#         (In (select In I)))
#     E2))

If we didn't specify I1...In, it would be very hard to tell which variables
the with expression is binding.

To package x, y, z into an environment:

# (record (x x) (y y) (z z))

>> modules

What if we want an environment with mutually recursive bindings?

# (module (I1 E1) ... (In En))
# \Rightarrow
# (letrec ((I1 E1) ... (In En)) (record (I1 I1) ... (In In)))

>> Misc primitives

>>> override

Append 2 records, giving precedence to E2. Gives us subclassing.

# (override E1 E2)
# (override joe (record (age 32)))

>>> conceal

Makes certain values disappear -- not accessible from the outside anymore.
Allows us to encapsulate variables and protect abstraction barriers.

# (conceal (I1 ... In) E)

>> Example

# (define make-point
#   (lambda (x y)
#     (module (x x) (y y)
#       (rho (sqrt (+ (* x x) (* y y))))
#       (theta (atan y x)))))
# (define p (make-point 1 2))
# (with (rho theta) p (* rho rho))

[10/06/99 03:14 PM] Recitation

> Naming

>> Parameter Passing

What do names mean?

>>> Call-by-name (call-by-need?)

You can name arbitrary computations (e.g., bottom and errors).

>>> Call-by-value

You can only name values.

>>> Call-by-denotation

Denotable = Environment \to Computation.

>>> Strictness

>> (Hierarchical) Scoping

How do you organize your names? What names are visible, and how do you
decide which name shadows which?

>>> Static
>>> Dynamic

>> Non-Hierarchical Scoping

Packaging up bunches of names and using them, denoting them.

>>> Records
>>> Modules
>>> OOP

>> Bizarre Naming

>>> Multiple namespaces
>>> Mixed scoping

> Object Oriented Programming

>> HooPLA

E ::=
- L
- I
- (method M (Iself Iformal*) E)
- (object-compose E1 E2)
- (null-object)
- (send M Eobj Earg*)
- (object E*)
- (class (Iinit*) Einstance*)

Desugaring of class:

# D\lsemantics (class (Iinit*) E*) \rsemantics =
#   (method make (Iignore Iinit*) (object E*))

Example:

# (define point (class (init-x init-y)
#   (method x (self) init-x)
#   (method y (self) init-y)
#   (method move (self dx dy)
#     (object (send make point (send + (send x self) dx)
#                              (send + (send y self) dy))
#             self))))

Example:

# (define color (class (clr)
#   (method color (self) clr)
#   (method new-color (self new) (object (send make color new)
#                                        self))))

When we make a new point, point doesn't know about all the methods you
support. For example, consider:

# (define colored-point (class (x y col)
#   (send make point x y)
#   (send make color col)))

Then if we call the move method on a colored point, we'll get back a colored
point! This happens because the (object (send ..) self) clause combines the
new methods (send ...) with the old ones (self).
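The four record primitives above map naturally onto Python dicts; a sketch (an illustration, not the notes' definitions):

```python
def record(**fields):
    return dict(fields)

def select(name, rec):
    return rec[name]

def override(r1, r2):
    # Append two records, E2's bindings taking precedence (subclassing).
    return {**r1, **r2}

def conceal(names, rec):
    # Hide certain bindings from the outside.
    return {k: v for k, v in rec.items() if k not in names}

joe = record(age=23, male=True)
assert select('age', joe) == 23
assert override(joe, record(age=32)) == {'age': 32, 'male': True}
assert 'age' not in conceal(['age'], joe)
```

Modules differ only in allowing the field expressions to refer to each other recursively, which the letrec desugaring supplies.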
> State

# Store = Location \to Assignment
# Location = Nat
# Assignment = (Storable | Unbound)_\bot
# Storable = Value   {depends on the language}

Keep E as:

# E = Exp \to Environment \to Computation

But we have to redefine Computation:

# Computation = Store \to Store x Expressible
# Expressible = (Value | Error)_\bot
# Value = Int + Bool + Procedure + Location

New types of E:

- (cell E)
- (primop cell-set! E1 E2)
- (primop cell-get E1)

# E\lsemantics (cell E) \rsemantics = (lambda (e)
#   (with-value (E\lsemantics E \rsemantics e)
#     (lambda (v)
#       (allocating v
#         (lambda (l)
#           (val-to-comp (Location \to Value l)))))))

Define with-value:

# with-value: Computation \to (Value \to Computation) \to Computation
#           = Computation \to (Value \to Computation) \to Store \to Store x Expressible
#
# (lambda (c) (lambda (f) (lambda (s)
#   (matching (c s)
#     (<s1, (Value \to Expressible v)> ((f v) s1))
#     (<s1, error> <s1, error>)
#   ))))

Define allocating:

# allocating: Storable \to (Location \to Computation) \to Computation
#
# (lambda (storable) (lambda (f) (lambda (s)
#   ((f (fresh-loc s)) (assign (fresh-loc s) storable s)))))

Define sequence, which evaluates its 1st argument, ignores its value, and
then evaluates its second argument:

# E\lsemantics (sequence E1 E2) \rsemantics =
#   (lambda (e) (with-value (E\lsemantics E1 \rsemantics e)
#     (lambda (v) (E\lsemantics E2 \rsemantics e))))

> FLAVAR!

# (set! I E)

E.g.:

# (let ((a 0)
#       (f (lambda (x) (+ x x))))
#   (f (begin (set! a (+ a 1)) a)))

- Call-by-value: 2
- Call-by-name: 3
- Call-by-need: 2

(Call-by-need: like call-by-name, but keep track of whether we've evaluated
the argument yet.. if we have, then just use the old value (memoizing).)

# (let ((a 0)
#       (double (lambda (in out) (set! out (+ in in)))))
#   (begin
#     (double 17 a)
#     (+ a 1)))

- Call-by-reference: 35

# Denotable = Location
# Storable = Value         {CBV or CBRef}
# Storable = Computation   {CBN}
# E\lsemantics I \rsemantics = (lambda (e) (with-denotable (lookup e I)
#   (lambda (l) (fetching l val-to-comp))))

[10/12/99 01:12 PM] Lecture

> Standard Semantics

The standard way to describe the semantics of a programming language. Use
continuations to model control transfers. SOS is good with parallelism and
multi-processing; denotational semantics is good with non-local control
transfer.

So far, we model state with:

# E: Exp \to Environment \to Store \to (Store x Expressible)

where we called Store \to (Store x Expressible) a Computation.

Direct Semantics (FL & Mutation)
- Mutable Cells, Store
- FLAVAR! (and set!)
- Parameter Passing Mechanisms

> Standard Semantics
- Continuations
- Valuation Clauses
- Helper Functions
- Control Problem
- Expcont & Cmdcont

We can now give values to errors:

# E: Exp \to Environment \to Expcont \to Store \to Expressible
# Expcont = Value \to Store \to Expressible
#
# E\lsemantics (error Y) \rsemantics =
#   (lambda (e k) (error-cont Y\lsemantics Y \rsemantics))

Valuation for if:

# Expcont = Value \to Cmdcont
# Cmdcont = Store \to Answer
# Computation = Expcont \to Cmdcont
#
# E\lsemantics (if E1 E2 E3) \rsemantics =
#   (lambda (e k) ((E\lsemantics E1 \rsemantics e)
#     (lambda (v)
#       (matching (v)
#         ((Bool \to Value true) (E\lsemantics E2 \rsemantics e k))
#         ((Bool \to Value false) (E\lsemantics E3 \rsemantics e k))
#         (_ (error-cont 'non-boolean))))))

>> Example with Continuations

Define a new language: FLK! | (loop E) | (jump) | (exit E).

Sample program:

# (let ((c (cell 0)))
#   (loop
#     (begin (cell-set! c (+ (cell-ref c) 1))
#            (if (> (cell-ref c) 10)
#                (exit (cell-ref c))
#                (jump)))))
# ; \Rightarrow 11

The problem: define valuation functions for loop and exit..
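Call-by-need can be sketched as a memoized thunk (an illustration, not from the notes): like call-by-name the argument is a delayed computation, but it runs at most once, which is why the (begin (set! a (+ a 1)) a) argument gives 2 rather than 3.

```python
def make_thunk(compute):
    cache = []
    def force():
        if not cache:
            cache.append(compute())   # evaluate once, remember the result
        return cache[0]
    return force

state = {'a': 0}
def compute():
    state['a'] += 1                   # models (begin (set! a (+ a 1)) a)
    return state['a']

f = lambda x: x() + x()               # the (+ x x) body forces x twice

arg = make_thunk(compute)
assert f(arg) == 2                    # call-by-need: 1 + 1

state['a'] = 0
assert f(compute) == 3                # call-by-name: 1 + 2, re-evaluated
```

Call-by-value corresponds to forcing the argument once before the call, which also yields 2.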
First, change the signature of E:

# E: Exp \to Environment \to Exitcont \to
#      Jumpcont \to Expcont \to Store \to Expressible
# j \in Jumpcont = Cmdcont
# w \in Exitcont = Expcont
# Cmdcont = Store \to Expressible
#
# E\lsemantics (jump) \rsemantics = (lambda (e w j k) j)
#
# E\lsemantics (exit E) \rsemantics =
#   (lambda (e w j k) (E\lsemantics E \rsemantics e w j w))
#
# E\lsemantics (loop E) \rsemantics =
#   (lambda (e w j k)
#     (fix_cmdcont (lambda (j) (E\lsemantics E \rsemantics e k j k))))
#   ;                                                          ^
#   ; This k is only used if we don't exit via exit.

Next problem: show that E\lsemantics (loop (jump)) \rsemantics =
(lambda (e w j k) \bot_cmdcont). Plug E\lsemantics (jump) \rsemantics into
our clause for E\lsemantics (loop (jump)) \rsemantics and we get:

# (fix_cmdcont (lambda (j) ((lambda (e w j k) j) e k j k))) =
# (fix_cmdcont (lambda (j) j))

[10/14/99 01:08 PM] Lecture

!! Midterm review !!
!! Sun 10/17 7-9pm !!
!! Room: !!

> Control

Modelling non-local control transfers.

>> Label and Jump

Define a new FLK-based language:

# E ::= ... | (label I E) | (jump E1 E2)

Examples:

# (label n
#   (+ (jump n 2) 2)) \Rightarrow 2

# (let ((p (lambda (c v)
#            (jump c (+ 1 v)))))
#   (label done (p done 2)))

Note in this example that the jump is called where it's not statically
surrounded by a label -- jumps are dynamic.

# (label x (+ 1 (label x (jump x 0)))) \Rightarrow 1

Add a label (ControlPoint) to the value domain:

# Value = ... + ControlPoint
# ControlPoint = Value \to Store \to Expressible = Expcont
# E\lsemantics (label I E) \rsemantics =
#   (lambda (e k) (E\lsemantics E \rsemantics [I:(ControlPoint \to Value k)]e k))
# E\lsemantics (jump E1 E2) \rsemantics =
#   (lambda (e k) ((E\lsemantics E1 \rsemantics e)
#     (test-control-point
#       (lambda (k2) (E\lsemantics E2 \rsemantics e k2)))))

Now consider the example:

# (jump (label x x) (label y y))
# E\lsemantics (label x x) \rsemantics = (lambda (e k) (k (ControlPoint \to Value k)))
#
# (jump (label x x) E) = (jump E E)
# (jump (label x x) (label y y)) = (jump (label y y) (label y y))
#   = \bot

>> Exceptions

Define a new FLK-based language:

# E ::= ... | (raise I E) | (trap I E1 E2)

- (raise I E) raises an exception named I.
- (trap I E1 E2) evaluates E2, trapping any exception named I. The value of
  the exception is passed to the procedure E1, and the return value of E1
  determines the value of raising the exception.

# (let ((p (lambda (x) (raise out x))))
#   (trap out (lambda (y) (+ 1 y))
#     (p 0)))

# (trap a (lambda (x) 3)
#   (trap b (lambda (x) (raise a 5))
#     (trap a (lambda (x) 7)
#       (raise b 11))))

Trap handlers are dynamically scoped, so this evaluates to 7, not 3.
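Dynamic scoping of trap handlers can be sketched with a global handler stack (an illustrative model, not the notes' denotational definition): raise consults the most recently installed handler for the named exception at the raise point, which is why the nested example yields 7.

```python
handlers = []

def trap(name, handler, thunk):
    # Install a handler for the dynamic extent of thunk.
    handlers.append((name, handler))
    try:
        return thunk()
    finally:
        handlers.pop()

def raise_exc(name, value):
    # Search handlers installed at the raise point, most recent first.
    for n, h in reversed(handlers):
        if n == name:
            return h(value)
    raise KeyError(name)

# (trap a (lambda (x) 3) (trap b (lambda (x) (raise a 5))
#   (trap a (lambda (x) 7) (raise b 11))))
result = trap('a', lambda x: 3,
          lambda: trap('b', lambda x: raise_exc('a', 5),
            lambda: trap('a', lambda x: 7,
              lambda: raise_exc('b', 11))))
assert result == 7
```

The b handler runs with the raise point's handler environment still in place, so its (raise a 5) finds the inner a handler (7), not the outer one (3).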
We need to redefine Computation:

# E\lsemantics \rsemantics : Exp \to Environment \to Computation
# Computation = HandlerEnv \to Expcont \to Cmdcont
# w \in HandlerEnv = Identifier \to Procedure
# Procedure = Denotable \to Computation
# Empty_Handler = (lambda (I d) (error-to-comp I))

# E\lsemantics (trap I E1 E2) \rsemantics =
#   (lambda (e w k)
#     (E\lsemantics E1 \rsemantics e w (test-procedure
#       (lambda (p)
#         (E\lsemantics E2 \rsemantics e (extend-handlers w I p) k)))))
#
# E\lsemantics (raise I E) \rsemantics =
#   (lambda (e w k) (E\lsemantics E \rsemantics e w (lambda (v) ((w I) v w k))))
#
# E\lsemantics (call E1 E2) \rsemantics =
#   (lambda (e w k) (E\lsemantics E1 \rsemantics e w
#     (test-procedure (lambda (p)
#       (E\lsemantics E2 \rsemantics e w (lambda (v) (p v w k)))))))
#
# E\lsemantics (proc I E) \rsemantics =
#   (lambda (e w k)
#     (k (Procedure \to Value
#          (lambda (v w' k') (E\lsemantics E \rsemantics [I:v]e w' k')))))

Termination semantics: change it so trap always returns to its own
continuation, even if E1 tries to return to some other continuation. Call
this new version handle.

# E\lsemantics (handle I E1 E2) \rsemantics =
#   (lambda (e w k)
#     (E\lsemantics E1 \rsemantics e w (test-procedure
#       (lambda (p)
#         (E\lsemantics E2 \rsemantics e
#           (extend-handlers w I (lambda (v2 w2 k2) (p v2 w2 k)))
#           k)))))

We can do this with sugar instead:

# D\lsemantics (handle I E1 E2) \rsemantics =
#   (label I1 (trap I (proc I2 (jump I1 (call D\lsemantics E1 \rsemantics I2)))
#     (jump I1 D\lsemantics E2 \rsemantics)))

> Continuation Passing Style

A source-to-source translation that ensures that no procedure ever returns.
Thus, we don't need control stacks..

# C: FLK! \to FLK!CPS
# Top: FLK! \to FLK!CPS
# Top\lsemantics E \rsemantics = (call C\lsemantics E \rsemantics (proc x x))
# C\lsemantics L \rsemantics = (proc k (call k L))
# C\lsemantics I \rsemantics = (proc k (call k I))
# C\lsemantics (primop P E) \rsemantics =
#   (proc k (call C\lsemantics E \rsemantics (proc v (call k (primop P v)))))
# C\lsemantics (proc I E) \rsemantics =
#   (proc k1 (call k1 (proc I (proc k2 (call C\lsemantics E \rsemantics k2)))))
# C\lsemantics (call E1 E2) \rsemantics =
#   (proc k
#     (call C\lsemantics E1 \rsemantics
#       (proc v1 (call C\lsemantics E2 \rsemantics
#         (proc v2 (call (call v1 v2) k))))))

[10/19/99 01:08 PM] Lecture

> Explicit Types

Define a type as a set of values? A description of a value? An approximation
of a value (i.e., say that the type has less information content than any of
the elements of the type...??)

>> Terminology

We can prove that expression E has type T:

# \vdash E : T

We can also say that with respect to a type environment A, expression E has
type T:

# A \vdash E : T
# A[x:int] \vdash x : int
# A \vdash (lambda ((x int)) x) : (\to (int) int)

So "\vdash" = provable; ":" = has type; "A" = type environment.

>> Overall claims of typing

Different factions about types:

# [ all ascii char strings                  ]
# [  [ syntactically well-formed         ]  ]
# [  [  [ no run-time errors          ]  ]  ]
# [  [  [  [ correct answer        ]  ]  ]  ]
# [  [  [  [                       ]  ]  ]  ]

Type people make the following assertion: the set of well-typed programs is
contained in the set of programs with no run-time errors. Also, it intersects
the correct-answer space. Also, well-typed programs are more likely to give
correct answers..?

>> Scheme/X

Scheme with explicit types. We want to catch run-time typing errors:

- applying a non-proc
- if on a non-boolean
- applying a primop to wrong types
- incorrect # of arguments to procedures
- all parameters must be of the correct type (?)

Sound type system: prove with the operational semantics and typing rules
that certain classes of errors cannot occur in well-typed expressions.
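The CPS translation can be demonstrated by writing one function in CPS by hand (an illustration in Python, not the notes' translator): every call takes an explicit continuation k, and no call site has any work left after the callee runs. Note Python lacks tail-call elimination, so this only models the idea.

```python
def fact_cps(n, k):
    # Direct style: fact(n) = 1 if n == 0 else n * fact(n - 1).
    # CPS: the "rest of the computation" (* n _) moves into the continuation.
    if n == 0:
        return k(1)
    return fact_cps(n - 1, lambda v: k(n * v))

# Top supplies the identity as the initial continuation, as in (proc x x):
assert fact_cps(5, lambda x: x) == 120
```

Because every intermediate result is passed forward instead of returned, a compiler for CPS code needs no control stack.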
Our typing system: T := int | bool | sym | (\to (T1 .. Tn) Tr) | unit Change the syntax of lambdas: # (lambda ((x xtype) (y ytype) (z ztype)) body) Define axioms of the form: # \vdash 1:int # \vdash #t:bool # \vdash (lambda ((x int)) (+ 1 x)) : (\to (int) int) # \vdash (lambda ((x int)) (lambda ((y int)) (+ x y))) : # (\to (int) (\to (int) int)) >> Typing rules # A[I:T] \vdash I:T [var] # A[I1:T1, ..., In:Tn] \vdash Eb : Tb # --------------------------------------- [lambda] # A \vdash (lambda ((I1 T1) ... (In Tn)) Eb) : (\to (T1 .. Tn) Tb) # A \vdash Ei:Ti 1\leq i\leq n ; A \vdash E0: (\to (T1 .. Tn) Tr) # --------------------------------------- [app] # A \vdash (E0 E1 ... En): Tr # A \vdash E1: bool ; A \vdash E2: T ; A \vdash E3: T # --------------------------------------- [if] # A \vdash (if E1 E2 E3) : T We can simply look at a form and figure out its type. For example, for any A, it's true that: # A[x:int] \vdash 1: int [intlit] # A[x:int] \vdash x: int [var] # A[x:int] \vdash +: (\to (int int) int) [var] # A[x:int] \vdash (+ 1 x): int [app] # A \vdash 1:int [intlit] # A \vdash (lambda ((x int)) (+ 1 x)): (\to (int) int) [lambda] # A \vdash ((lambda ((x int)) (+ 1 x)) 1): int [app] So to do typechecking, just apply whatever rule matches the expression, recursively checking all the subexpressions. This will find the type of the program, if it has one. Is it possible to do type checking with dynamic scoping?? E.g., typing on exceptions? > Subtyping >> one-of (sum type) Examples: # (define-type nlights (one-of (car int) (bike bool))) # \vdash (one nlights car 4) : nlights # \vdash (one nlights bike #t) : nlights # \vdash (lambda ((x nlights)) # (tagcase x # (car lights lights) # (bike b (if b 1 0)))): (\to (nlights) int) # A \vdash E:T ; \exists i. Ii = I and T=Ti # --------------------------------------------- # A \vdash (one (one-of ((I1 T1) .. (In Tn))) I E): # (one-of (I1 T1) ..
(In Tn)) Note that "define-type" is basically just a macro facility that replaces nlights with (one-of ...); thus we can't tell apart two types with the same definition but different names... >> pair-of (product type) >> record-of (named product type) >> rec-of (recursive types) Examples: # (pair-of int (rec-of t (pair-of bool (pair-of int t)))) # (rec-of t (pair-of int (pair-of bool t))) How do we know if two rec-of types are equivalent?? (rec-of I T) = [(rec-of I T)/I] T # (define-type tree # (rec-of t (one-of (leaf int) # (inner (record-of # (left t) # (right t)))))) # # (one tree inner (record (left (one tree leaf 1)) # (right (one tree leaf 2)))) # (Is this really a tree, or just a generic directed graph? Can't we define "trees" that aren't trees? :) ) >> Subtyping "T1 \sqsubseteq T2" means T1 is a subtype of T2. I.e., we can use a T1 where a T2 is expected. Depending on our implementation, either or both of these may be true: # (record-of (age int) (member bool)) \sqsubseteq (record-of (age int)) # (record-of (age int) (member bool)) \sqsubseteq (record-of (member bool)) # (one-of (car int)) \sqsubseteq (one-of (car int) (bike bool)) Mutable things can't be subtyped. E.g., (cell-of E) isn't a subtype of anything (except itself). # Tr \sqsubseteq Tr' ; Ti' \sqsubseteq Ti # ------------------------------------------ # (\to (T1 .. Tn) Tr) \sqsubseteq (\to (T1' .. Tn') Tr') [10/26/99 01:14 PM] Lecture > Type Reconstruction Questions: - How do you know the types are correct? - What classes of errors are guaranteed not to occur? - Is it guaranteed to discover the types of any well-typed program?
Examples: # \vdash (lambda (x) (+ x 1)) : (\to (int) int) # \vdash (lambda (x) ((x 1) 2)) : # (\to ((\to (int) (\to (int) t))) t) # \vdash (lambda (x) (+ 1 ((x 1) 2))) : # (\to ((\to (int) (\to (int) int))) int) Define Scheme/R: # P ::= (program Ebody (define I E)*) # E ::= (E E*) | I | (lambda (I*) E) | # (if E1 E2 E3) | L | (let ((I E)*) E) | # (letrec ((I E)*) E) Type safety is required for automatic storage management (garbage collection); otherwise, we can't tell what's a pointer and what's not. >> Typing Rules Typing rules tell us whether a particular program is well-typed. The typing rules for Scheme/R are basically just the rules given in [Explicit Types/Typing Rules] # A[I1:T1, ..., In:Tn] \vdash Eb : Tb # --------------------------------------- [lambda] # A \vdash (lambda (I1 ... In) Eb) : (\to (T1 .. Tn) Tb) How do we come up with the types of the arguments? Any well-typed program is guaranteed not to exhibit the classes of errors discussed above. >> Unification Define substitution functions: # TypeVar = ?v1, ?v2, \ldots # TypeLit = IntType, BoolType, \ldots # t \in Type = TypeVar + TypeLit # S \in Substitution = Type \to Type Define a unify operator U. The unification operator takes two types, and tries to coerce them to be the same type. If this is possible, then it returns S extended with enough bindings to make the two types equivalent. # S' = U(t1, t2, S) Therefore, if S'=U(t1, t2, S), then we should have: # S' t1 \equiv S' t2 # S \sqsubseteq S' For example: # U((?x ?y), (?y int), \emptyset) # \Rightarrow ?x=int, ?y=int # U[(\to (?x) (\to (?x) ?y)), (\to (int) ?z), \emptyset] # \Rightarrow ?x=int, ?z=(\to (int) ?y) >> Reconstruction Define a reconstruction function, R: # R\lsemantics E \rsemantics : Bindings \to Substitution \to (Type \times Substitution) The type reconstruction function takes an expression, a set of bindings (from identifiers to types), and a set of substitutions. It then determines the type of that expression (together with an extended substitution), assuming the given bindings and substitutions.
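The unification operator U can be sketched directly in Python (the representation is ours: type variables are strings starting with "?", compound types are tuples, a substitution is a dict). Note that this naive `resolve` only chases variables at the top level; it does not rewrite inside compound types:

```python
# Sketch of the unification operator U. Raises TypeError when the two
# types cannot be coerced to be the same (U is a partial function).

def resolve(t, subst):
    """Follow substitution chains until t is not a bound variable."""
    while isinstance(t, str) and t.startswith("?") and t in subst:
        t = subst[t]
    return t

def unify(t1, t2, subst):
    """Return subst extended with enough bindings to equate t1 and t2."""
    t1, t2 = resolve(t1, subst), resolve(t2, subst)
    if t1 == t2:
        return subst
    if isinstance(t1, str) and t1.startswith("?"):
        return {**subst, t1: t2}
    if isinstance(t2, str) and t2.startswith("?"):
        return {**subst, t2: t1}
    if isinstance(t1, tuple) and isinstance(t2, tuple) and len(t1) == len(t2):
        for a, b in zip(t1, t2):               # unify componentwise
            subst = unify(a, b, subst)
        return subst
    raise TypeError("cannot unify %r with %r" % (t1, t2))
```

Running the first example from the notes, `unify(("?x", "?y"), ("?y", "int"), {})`, yields a substitution under which both ?x and ?y resolve to int.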
# R\lsemantics E \rsemantics A S0 = <T, S> A type reconstruction function R is sound iff: # R\lsemantics E \rsemantics A S0 = <T, S> \Rightarrow (S A) \vdash E : (S T) A type reconstruction function R is complete iff: # R\lsemantics E \rsemantics A S0 = <T, S> \Leftarrow (S A) \vdash E : (S T) Reconstruction Algorithm: # R\lsemantics #u \rsemantics A S = <unit, S> # # R\lsemantics I \rsemantics A[I:T] S = <T, S> # # R\lsemantics (if E1 E2 E3) \rsemantics A S = # (let* ((<T1,S1> (R\lsemantics E1 \rsemantics A S)) ; Get type of E1. # (S1' U(T1,bool,S1)) ; Force T1 to be bool # (<T2,S2> (R\lsemantics E2 \rsemantics A S1')) ; Get type of E2. # (<T3,S3> (R\lsemantics E3 \rsemantics A S2)) ; Get type of E3. # (S3' U(T2, T3, S3))) # <T3, S3'>) # # R\lsemantics (lambda (I1 \ldots In) E) \rsemantics A S = # (let* ((<Tbody,Sbody> (R\lsemantics E \rsemantics A[I1=?v1, \ldots, In=?vn] S))) # <(\to (?v1 \ldots ?vn) Tbody), Sbody>) # # R\lsemantics (E0 E1 \ldots En) \rsemantics A S = # (let* ((<T0,S0> (R\lsemantics E0 \rsemantics A S)) # (<T1,S1> (R\lsemantics E1 \rsemantics A S0)) # \ldots # (<Tn,Sn> (R\lsemantics En \rsemantics A Sn-1)) # (Sfinal U(T0, (\to (T1 \ldots Tn) ?vout), Sn))) # <?vout, Sfinal>) [10/27/99 03:09 PM] Recitation > Type Reconstruction >> Unification Unification is a partial function (model errors as times when we try to apply the partial function to something that it doesn't have a value for).
# U : Type \to Type \to Subst \to Subst # U(T1, T2, S) = U(T2, T1, S) # U(t, t, S) = S # U(?t, t, \emptyset) = {?t=t} # U(?t, t, S) = [U((S ?t), t, \emptyset)+S] # U(?t1, ?t2, S) = S + {?t1=?t2} # U( (\to (T1) T3), (\to (T2) T4), S) # = U(T1,T2,S) + U(T3,T4,S) Reconstruct: # R\lsemantics (lambda (x y) (if x y 0)) \rsemantics \emptyset \emptyset # let <Tbody, Sbody> = R\lsemantics(if x y 0)\rsemantics [x:?v1, y:?v2] \emptyset # R\lsemantics(if x y 0)\rsemantics [x:?v1, y:?v2] \emptyset \Rightarrow # <Ttest, Stest> = R\lsemantics x\rsemantics [x:?v1, y:?v2] \emptyset # Stest' = U(Ttest, bool, \emptyset) = {?v1=bool} # R\lsemantics y\rsemantics A {?v1=bool} = <?v2, {?v1=bool}> # R\lsemantics 0\rsemantics A {?v1=bool} = <int, {?v1=bool}> # U(?v2, int, {?v1=bool}) = {?v1=bool, ?v2=int} # <(\to (?v1 ?v2) int), {?v1=bool, ?v2=int}> Compose: # R\lsemantics (lambda (g f) (lambda (x) (g (f x)))) \rsemantics \emptyset \emptyset # = (\to ((\to (a) b) (\to (c) a)) (\to (c) b)) >> Substitution Define a substitution as a function from type variables to (generic) types. Then if we want to apply the subst to a complex type, use the function complex-subst: # complex-subst\lsemantics S, primt\rsemantics = (S primt) # complex-subst\lsemantics S, t1\to t2\rsemantics = complex-subst\lsemantics S, t1\rsemantics \to complex-subst\lsemantics S, t2\rsemantics Consider # (lambda (id) (if (id #t) (id 1) (id 0))) [10/28/99 01:07 PM] Lecture > Polymorphic Types Consider the examples: # (\lambda (x) x) # (\lambda (f) (\lambda (g) (\lambda (x) (f (g x))))) What do we do when we have underconstrained variables? For example, our type system will give (\to (?t) ?t) as the type for the identity function. How can we prove that a polymorphic type system is sound and complete? We defined let as: # A[I1=T1 \ldots In=Tn] \vdash Eb:T ; A \vdash Ei:Ti # --------------------------------- [mono-let] # A \vdash (let ((I1 E1) \ldots (In En)) Eb) : T but consider the expression: # (let ((id (lambda (x) x))) # (if (id #t) (id 1) (id 2))) The identity function (id) must be given a single type.
Whenever it is used, its type gets defined.. So we need a new typing rule. # A \vdash [Ei/Ii] Eb:T # --------------------------------- [poly-let] # A \vdash (let ((I1 E1) \ldots (In En)) Eb) : T Here, we're replacing each identifier by its binding for type-checking, so each time an identifier is used, it can get its own binding.. We would prefer to not actually insert the text of Ei for every identifier.. Precompute its type scheme, and then use that whenever we want to check the type of a particular expression. So define type schemes (not a member of type): # TS := (generic (I*) T) # (generic (?t) (\to (?t) ?t)) Consider the following example: # (lambda (x) # (let ((y x)) # (if y 1 x))) We can't generalize pattern variables that are defined in the surrounding environment: they might get frozen later. rewrite polylet as (semantically equivalant): # A[I1:Gen(T1,A) \ldots In:Gen(Tn,A) \vdash Eb:T ; E\vdash Ei:Ti # ------------------------------------------ [poly-let] # A \vdash (let ((I1 E1) \ldots (In En)) Eb) : T # # Gen(T,A) = (generic (J1 \ldots Jn) T) # {Ji} = FreeTypeVar(T) - FreeTypeEnv(A) # # A[I:(generic (Ii In) T) \vdash [Ti/Ii] T # RGen(T, A, S) = Gen((S T) (S A)) Define reconstruction clauses: # R\lsemantics I \rsemantics A[I:(generic I1\ldots In T)] S = # <[?v1/I1] \ldots [?vn/In] T,S> # # R\lsemantics (let ((I1 E1) \ldots (In En)) Eb) \rsemantics A S = # (let* (( (R\lsemantics E1 \rsemantics A S)) # ( (R\lsemantics En \rsemantics A S_n-1))) # R\lsemantics Eb \rsemantics A[I1:Rgen(T1,A,Sn) \ldots In:Rgen(Tn,A,Sn)] Sn) # A[I1:Ti\ldots In:Tn] \vdash Ei:Ti # A[I1:gen(Ti A)\ldots In:gen(Tn A)] \vdash Eb:T # ------------------------------------------ [poly-letrec] # A \vdash (letrec ((I1 E1) \ldots (In En)) Eb) : T Thus, the new bindings in a letrec can be polymorphic in Eb.. But they can't be polymorphic when they call each other. How do we do the reconstruction algorithm? 
# R\lsemantics (letrec ((I1 E1) \ldots (In En)) Eb) \rsemantics A S = # (let* ((A0 A[I1:?v1 I2:?v2 \ldots In:?vn]) # ( (R\lsemantics E1 \rsemantics A0 S)) # ( (R\lsemantics En \rsemantics A0 S_n-1)) # (Sfinal unify U[(?v1 \ldots ?vn), (T1 \ldots Tn), Sn])) # R\lsemantics Eb \rsemantics A[I1:Rgen(T1,A,Sfinal) \ldots # In:Rgen(Tn,A,Sfinal)] # Sfinal) Consider the program: # (letrec ((x x)) # (if x 1 x)) This type-checks! The type of bottom is: # (generic (?v1) v1) Consider: # (letrec ((id (lambda (x) x)) # (f (lambda (y) + y (id y)))) # \ldots) What is the type of id? (\to (int) int) Lists: # Null: (generic (t) (\to () (list-of t))) # Cons: (generic (t) (\to (t (list-of t)) \to (list-of t))) # Car: (generic (t) (\to ((list-of t)) \to t)) # Cdr: (generic (t) (\to ((list-of t)) \to (list-of t))) Side effects: # Cell : (generic (t) (\to (t) (cell-of t))) # ^ : (generic (t) (\to ((cell-of t)) t)) # := : (generic (t) (\to ((cell-of t) t) unit)) But there's a problem! Consider the code: # (let ((x (cell (null)))) # (begin (update x (cons 1 (null))) # (if (car (^ x)) 2 3))) Just don't generalize expressions that have (immediate) side effects? Note that this doesn't include procedures with side effects inside them, because these will come later.. [11/02/99 02:04 PM] Lecture > Polymorphism II >> Scheme/XSP (explicity types, subtyping, polymorphism) Consider map. It's polymorphic type is: # (generic (t t2) (\to ((\to (t) t2) (list-of t)) (list-of t2))) This won't type check: # (lambda (map) # (map not? (map odd? ('1 2 3)))) But consider introducing explicity polymorphism with something like: # (lambda (map (poly (t1 t2) # (\to ((\to (t1) t2) (listof t1)) (listof t2)))) # ((proj map bool bool) not? # ((proj map int bool) odd? # ('1 2 3)))) We explicitly project with (proj Igeneric I1\ldots In), and explicitly make things polymorphic with (plambda (I1\ldots In) E). 
But these explicit plambdas and proj's make things look quite ugly: # (letrec # ((map (plambda (t1 t2) # (lambda ((f (\to (t1) t2)) (l (listof t1))) # (if ((proj null? t1) l) # ((proj null t2)) # ((proj cons t2) (f ((proj car t1) l)) # (proj map t1 t2) f ((proj cdr t1) l))))))) # \ldots) Extensions: # E ::= \ldots | (plambda (I1\ldots In) E) | (proj E T1\ldots Tn) # T ::= \ldots | (poly (I1\ldots In) T) New Typing rules: # A \vdash E:T # \forall i Ii \notin FTV(FV(E)) # ------------------------------------ # A \vdash (plambda (I1 \ldots In) E) : (poly (I1 \ldots In) T) # # A \vdash E : (poly (I1 \ldots In) T') T = [Ti/Ii]T' # ------------------------------------ # A \vdash (proj E T1 Tn) : T # # [Ii/Ji]T \sqsubseteq T' I' \in FrreeIds(T) # ------------------------------------ # (poly (I1 \ldots In) T) \sqsubseteq (poly (I1 \ldots In) T') # We need the "\forall\ldots" clause in plambda for a subtle reason. Consider: # ((proj # (plambda (int) # (lambda ((int x)) # (+ x 1))) # bool) # #t) # FTV(x:int) = x # FTV(x:(poly (int) int)) = \emptyset # ((proj # (plambda (con) # (lambda ((x (con int)) # (y (\to ((con int)) int)) # (y x)))) # listof) # '(1 2 3) (proj car int)) Consider: # \vdash (lambda ((x listof)) x) : (\to (listof) listof) Well-typed, but not useful.. It's impossible to call this procedure. >> Abstract Types We want to protect types that live together in a single address space from: - other people looking inside a type's values - other people modifying a type's values - other people forging a type's values We can use explicit polymorphism to create abstract types. Consider a polymorphic tree module: # M = (poly (t) # (module (new (t) (tree t)) # (get ((tree t)) t))) we want the client to know the type of m, but we don't want them to know the type of tree. i.e., we want the client to know the interface of m without knowing anything about the implementation. we make the implementaiton accept part of the client as a parameter. 
then the implementation can play with it in its own space.. consider the type of the implementation: # Timpl = (poly (r) (\to ((poly (tree) (\to (M) r)) r))) # (define impl # (plambda (r) # (lambda ((c (poly (tree) (\to (m r))))) # ((proj c listof) (module \ldots))))) # r is the result. # (poly (tree) (\to (M) r)) is a part of the client which # maps from a module to a result. Note that the impl # of M is hidden by the poly(tree).. # (define client # (lambda ((impl Timpl)) # ((proj impl r) # (plambda (tree) # (lambda ((mod M)) # (proj mod int) new 1))))) [11/04/99 02:10 PM] Lecture > Pattern Matching (by desugaring) # E ::= \ldots | (match E C*) # C ::= (P E) # P ::= L | _ | I | (I P*) For every constructor, assume a deconstructor, whose name is the constructor name with a "~" added to the end: # cons = constructor # cons~ = destructor # (define (cons~ value success failure) # (if (not (null? value)) # (success (car value) (cdr value)) # (failure value))) # cons~ : (generic (t t2) # (\to ( (listof t) # (\to (t (listof t)) t2) # (\to ((listof t)) t2)) # t2)) Note that we just test for null?, instead of calling pair?, because this type-checks. Consider: # (match E # ((cons 1 (cons x _)) (+ 1 x)) # (_ 2)) A first attempt at desugaring: # (let ((I1 E)) ;; Make sure we only evaluate E once. # (cons~ I1 (\lambda (I2 I3) # (if (= I2 1) # (cons~ I3 (\lambda (I4 I5) (let ((x I4)) (+ 1 x))) # (lambda (x) 2)) # 2)) # (lambda (x) 2))) Try this: # (let ((I1 E) (I6 (lambda (x) 2))) # (cons~ I1 (\lambda (I2 I3) # (if (= I2 1) # (cons~ I3 (\lambda (I4 I5) (let ((x I4)) (+ 1 x))) # I6) # (I6 I2))) # I6)) ;; (???) >> Desugaring Function # D\lsemantics (match E ((P1 E1) \ldots (Pn En))) \rsemantics = # (let ((id E)) # expandclause(P1, \ldots Pn, E1, \ldots En, id, basefailure))) # # expandclause(P1, \ldots Pn, E1, \ldots En, v, fail) = # (if (= n 0) # (fail v) ;; No patterns left -- fail. # (let ((id1 (\lambda (x) ;; id1 is the failure cont. 
# expandclause(P2, \ldots Pn, E2, \ldots En, v, fail)))) # (expandexp (P1, v, E1, id1)))) # # expandexp(pat, v, E, fail) # _ E # lit (if (lit-eq? L v) E (fail v)) # I (let ((I v)) E) # (I p1 \ldots pn) (I~ v (\lambda (id1 \ldots idn) e') f) # where e' = expandpat (p1, \ldots pn, id1, \ldots, idn, e, f) # # # expandpat(P1, \ldots Pn, id1, \ldots idn, s, fail) = # (if (= n 0) # s # (expandexp (P1, id1, e', fail) # where e'=expandpat(p2, \ldots pn, id2, \ldots idn, s, fail) = # # basefailure = (lambda (x) (error "oh no!!")) > Abstract Types # E ::= \ldots | (module D* B*) # B ::= (I E) # D ::= (define-datatype Iabs V*) | # (define-datatype (Iabs I1 \ldots In) V*) # V ::= (I T*) # (define-datatype sexp # (unit\to sexp unit) # (bool\to sexp bool) # (int\to sexp int) # (list\to sexp (list-of sexp))) Evaluate in scheme/R. Alpha-rename all type names. Since types are reconstructed, we don't need to remember names of types? # A[D*, I1:Tn, \ldots, In:Tn] \vdash Ei:Ti 1 \leq i \leq n # ------------------------------------------ # A \vdash (module D* (I1 E1) \ldots (In En)): (moduleof (I1 T1) \ldots (In Tn)) # # A[(define-datatype Iabs \ldots (In T1\ldots Tn)\ldots)] # \vdash In:(\to (T1\ldots Tn) Iabs) # # A[(define-datatype Iabs \ldots (In T1\ldots Tn)\ldots)] # \vdash In~:(\to (Iabs (\to (T1\ldots Tn) T) (\to (Iabs) T)) T) # ; T arbitrary # # A[(define-datatype (Iabs I1 \ldots In) \ldots (Iv T1\ldots Tn)\ldots)] # \vdash Iv:[T'i/Ii](\to (T1\ldots Tn) (Iabs I1\ldots In)) ; T'i arbitrary [11/09/99 01:07 PM] Lecture > Concurrency 2 reasons to emply concurrency: - performance - simplicity >> Fork/Join # E ::= \ldots | (fork E) | (join E) | (thread? E) The following is more-or-less equivalant to E1: # (let ((t (fork E1))) # (join t)) We can join a thread more than once: just get the value from the thread more than once. Thus, the value produced by the thread has to be kept around as long as any thread handlers that point to it are still accessible. 
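The fork/join construct behaves like futures; a hedged Python sketch (names `fork`/`join` mirror the notes; the thread-pool representation is our own choice) shows why the value must be retained -- the same handle can be joined more than once:

```python
# (fork E) starts evaluating a thunk in another thread and returns a
# handle; (join t) waits for and returns its value. Joining the same
# handle twice simply re-reads the retained value, which is why the value
# must be kept as long as the handle is reachable.
from concurrent.futures import ThreadPoolExecutor

_pool = ThreadPoolExecutor(max_workers=4)

def fork(thunk):
    return _pool.submit(thunk)   # the Future plays the role of a thread handle

def join(handle):
    return handle.result()       # blocks until the value is available

t = fork(lambda: 6 * 7)
assert join(t) == 42             # first join
assert join(t) == 42             # joining again returns the same value
```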
# (define (parallel-map f l) # (map (lambda (x) (join x)) # (map (lambda (x) (fork (f x))) l))) What if f has side effects? Have to be careful\ldots # T \in Thread-Handle = IntLit # A \in Agenda = Thread-Handle \to Exp # Configuration = Agenda \times Store SOS: # E \Rightarrow E' # -------------------------- # \Rightarrow # # \Rightarrow (T' new) # # \Rightarrow # # \Rightarrow # ---------------------------------------- # \Rightarrow This won't work very well if we fork more than one increment! on the same cell: # (define (increment! (lambda (c) # (begin (cell-set! c (+ 1 (cell-ref c))) # (cell-ref c))))) Consider: # (let ((c (cell 0))) # (let ((t1 (fork (increment! c))) # (t2 (fork (increment! c)))) # (+ (join t1) (join t2)) This might produce 2, 3, or 4. >> Obtain! / Release! Add a locking mechanism: # E ::= \ldots | (lock) | (obtain! E) | (release! E) In general, if we have shared mutable data, we may need a lock. Make some sugar to help avoid programming errors.. # D\lsemantics(monitor I E)\rsemantics = (let ((I (lock))) E) # D\lsemantics(exclusive El Eb)\rsemantics = # (let ((Il El)) # (obtain! Il) # (let ((Iret Eb)) # (release! Il) # Iret)) Rewrite the above example as: # (monitor c-lock # (let ((c (cell 0))) # (let ((t1 (fork (exclusive c-lock (increment! c)))) # (t2 (fork (exclusive c-lock (increment! c))))) # (+ (join t1) (join t2)) SOS: # \Rightarrow L fresh # \Rightarrow # \Rightarrow If we wanted to make sure that only the thread that locked something can release it, replace "S[L=#t]" with "S[L=T]"\ldots Deadlock fun! Construct a wait-for diagram: - If A holds L, draw an arrow from L to A. - If A is waiting for L, draw an arrow from A to L. - If this diagram has a cycle, then we're in deadlock. Very hard to construct wait-for diagrams in distributed systems. 
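The obtain!/release! and "exclusive" sugar can be sketched with Python's threading.Lock (names are ours; the with-statement plays the role of exclusive: obtain the lock, run the body, release it):

```python
# Sketch: serializing increments of a shared cell with a lock, so that
# concurrent read-modify-write sequences cannot interleave and lose updates.
import threading

c = {"value": 0}                  # a mutable cell shared by the threads
c_lock = threading.Lock()         # plays the role of (lock)

def increment():
    with c_lock:                  # (exclusive c-lock ...)
        c["value"] = c["value"] + 1
        return c["value"]

threads = [threading.Thread(target=increment) for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert c["value"] == 100          # no lost updates while the lock is held
```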
>> Condition Variables # E ::= \ldots | (condition) | (wait E) | (notify E) Allows us to do something like: # while not P do # wait C Example: # (monitor count-up # (let ((count (cell 0)) # (change (condition))) # (module # (increment (lambda () # (exclusive count-up # (begin (:= count (+ 1 (^ count))) # (notify change) # (^ count))))) # (wait-til-n (lambda () # (if (>= (exclusive count-up (^ count)) n) # #u # (begin (wait change) # (wait-til-n n))))) But what if someone changes the value after we do the if condition. Then the if condition sees a value of 99, but then the notify happens before the wait occurs.. :( There are a couple ways to fix that.. E.g., move the exclusive outside the if, and redefine wait to atomically unlock count-up and go into wait state. SOS: # CV = CV-State \times CV-Queue # CV-Sate = {wakeup-waiting | no-wakeup} # CV-Queue = Thread-Handle* # \Rightarrow # ]> # # ]> \Rightarrow # ]> # # ]> \Rightarrow # ]> # # ] \Rightarrow # ]> # # ]>] \Rightarrow # ]> [11/10/99 03:09 PM] Recitation > Abstract Datatypes # QI = (recordof (new (poly (t) (\to () (queueof t)))) # (add (poly (t) (\to (t (queueof t)) (queueof t)))) # (examine (poly (t) (\to (int (queueof t)) t)))) # (define queue-client T # (plambda (queueof) # (lambda ((queue QI)) # ((proj queue.examine string) 1 # ((proj queue.add string) "skating" # ((proj queue.add string) "study" # ((proj queue.new string)))))))) # # T = (poly (queueof (\to (QI) string))) # # ((proj queue-impl string) queue-client) # # queue-impl: T2 = (poly (t) (\to T t)) Now let's implement queue: # (define queue-impl T2 # (plambda (t) # (lambda ((qclient (poly (queueof) (\to (QI) t)))) # ((proj qclient listof) # (record (new null) # (add cons) # (examine my-list-ref)))))) # # Note that: null \equiv (poly (t) (proj null t)) Try an implementation more like: # QI' = (poly (t) (recordof (new (\to () (queueof t))) # (add (\to (t (queueof t)) (queueof t))) # (examine (\to (int (queueof t)) t)))) # # (define queue-client T # 
(plambda (queueof) # (lambda ((queue QI')) # (let ((strqueue (proj queue string))) # (queue.examine 1 # (queue.add "skating" # (queue.add "study" # (queue.new)))))))) Now let's implement queue': # (define queue-impl T2 # (plambda (t) # (lambda ((qclient (poly (queueof) (\to (QI) t)))) # ((proj qclient listof) # (plambda (t) # (record (new (proj null t)) # (add (proj cons t)) # (examine (proj my-list-ref t)))))))) # # Note that: null \equiv (poly (t) (proj null t)) [11/16/99 01:01 PM] Lecture > Effects & Types Consider the program: # (let ((c (cell 0))) # (:= c 2) # (^ c)) The type of c is: # c: (cellof int) If we want to, we can add more type information. For example, we could give cells "colors" (regions): # c: (cellof int blue) The color might, e.g., correspond to the region of the store that contains the cell. Then we can run 2 threads in parallel safely if they use different regions of the store. Also, we could try to figure out how long a cell lives by seeing when colors become unavailable. # T ::= \ldots | (cellof T R) # R ::= I So each cell has a region. Can we put every cell in a different region? No, because types have to match.. E.g.: # (if p (cell 0) (cell 1)) : (cellof int I) But we want to maximize diversity of colors. Define three effects: (init R), (read R), (write R). There is an ACUI algebra "maxeff".. (define an identity element "pure"). Redefine procedures: # T ::= (\to (T1\ldots Tn) F Tr) Where "F" is the latent effect -- it describes what the procedure will do when it's called. Define a type/effect operator: # A \vdash E : T ! F For example: # A[I:Tin] \vdash E : Tout ! F # ---------------------------------------------- # A \vdash (lambda (I) E) : (\to (Tin) F Tout) ! pure # # A \vdash Ep : (\to (T1) Flatent Tout) ! Fp # A \vdash E1 : T1 ! F1 # -------------------------------------------- # A \vdash (Ep E1) : Tout ! 
(maxeff F1 Fp Flatent) Consider the type of: # E1 = (lambda (c) # (:= c (+ 1 (^ c))) # (^ c)) # # E1 : (\to ((cellof int r)) (maxeff (read r) (write r)) int) If we generalize this, then we can use it on any region. If we don't generalize it (e.g., if we pass it as an argument), then we just force every thing that the procedure is called on to be in the same region. # (lambda (f x) (f x)) : (\to ((\to (t1) f t2) t1) f t2) Thus we can get polymorphism in effects! Add the rule: # A \vdash E : T ! F' ; F' \leq F # --------------------------- # A \vdash E : T ! F But now we can't reconstruct very easily. We have to use constraint propagation to do reconstruction. Define reconstruction algorithm Z. # Z\lsemantics E \rsemantics A S = # C ::= (I F)* ; map each effect I to a minimum effect F. # TS ::= (generic ((I D)*) T C) ; type scheme # D ::= type | effect | region # increment : (generic ((r region) (e effect)) # (\to (cellof int r) e int) # ((e \geq (read r)) (e \geq (write r)))) Since all effects are variables, we can unify two effects simply by setting them both to their union. Consider reconstructing lambdas: # Z\lsemantics (lambda (I1 \ldots In) Ebody) \rsemantics A S = # let be # Z\lsemantics Ebody \rsemantics A[I1:?v1, \ldots, In:?vn] S in # <(\to (?v1 \ldots ?vn) ?e Tbody), pure, Sbody, (\geq ?e Fbody)+Cbody> Reconstructing lets: # Z\lsemantics (let ((I1 E1) \ldots (In En)) Eb) \rsemantics A S = # let = Z\lsemantics E1\rsemantics A S in # \ldots # let = Z\lsemantics En\rsemantics A Sn-1 in # Z\lsemantics Eb \rsemantics A[I1:ZGen(T1,A,Sn,C1+\ldots+Cn) \ldots # In:ZGen(Tn,A,Sn,C1+\ldots+Cn)] Sn ZGen: # ZGen(T, A, S, C) = (generic (I1\ldots In) T C) # where {I1\ldots In} = FV(S T) + FV(S C) - FTV (S A) Consider cwcc: # cwcc : (\to ((\to ((\to (t1) t2)) t1)) t1) Now try adding effects! Whee! Add two new effects: goto and comefrom. 
# cwcc : (\to ((\to ((\to (t1) (goto r) t2)) # f t1)) (maxeff f (comefrom r)) t1) If we have an expression E, with (comefrom r) and/or (goto r) in it, and r isn't in the types of the FV's of T or in the output type of E, then we can ignore r.. This is true because the region can't exit/enter the region\ldots When can we free cells? Once their region goes away.. [11/17/99 03:02 PM] Recitation > Effects Some examples: # (lambda (c) (^ c)) : (\to ((cellof t r)) (read r) t) ! pure # # E1= (let ((c (cell 1))) ; This cell is r3 # (if (^ (cell #t)) ; This cell is r2 # (cell 0) # (begin (:= c 5) (cell 1)))) ; These cells are r1 # # E1: (cell-of int r1) ! (init r1), (init r2), (read r2), # (init r3), (write r3) Clearly, r2 can be thrown away once we leave the if statement, since there's no longer any way to get to it. # Z : Exp \to Type-Env \to Subst \to Each expression tends to inherit most of the constraints of its sub-expressions. In the end, find the minimum solution of the constraints. [11/18/99 01:09 PM] Lecture > RPC Remote procedure call: let a procedure on one computer call a procedure on another. Server needs an interface. Use a client stub and a server stub to handle the actual communications. Given an interface, we should be able to automatically implement stubs, such that p() in client stub sends a message to server stub that tells it to actually call p(). Send the args, too.. Then server p() returns answer, and server stub uses it to return a value etc. whee. Define an interface description language (IDL). Write ain interface in IDL, and it will give us the server & cleint stubs. Bind RPC modules as such: # (let ((m (open-remote "rpc://lcs.mit.edu/foo"))) # (m.p 1 2)) Differences between RPC and normal calls: - limitations on acceptable data values (e.g., hard to use cells, procedures, thread ids, exceptions)\ldots But most of these can be made to work with an advanced RPC scheme.. 
- communication failures -- impossible to tell the difference between network errors and server errors. Give three different semantics to procedure calls: - at least once (idempotent) - at most once: server throws away packets with the same identifier. It needs to remember packets that it saw before it crashed. It's ok to complete something partially. - exactly once: everything is atomic. Action is done exactly once or not at all. > Applet Safety Checks we can do on applets: - Run-time checks - Types - Control Environment Access (control applet's namespace) - Hardware Protection - Software fault isolation: rewrite object code such that all the stores are checked when the program is run to make sure they don't do anything bad. (5-30% slowdown) Give each applet a principal (signature) which gets fed into a policy engine to select what types of things the applet can do\ldots E.g., if the applet's principal is a big company, we might give it more access than if its principal is some random guy.. Protect file systems and network. >> Simple model Label each proc as local or remote. If we access disk or network, look up the call stack.. if any remote procs, deny.. (allow network connections back to the applet's source sometimes). >> Capability Model Give applets capabilities: unforgeable tokens that permit access. Problems: need to rewrite applets; also, confinement: how do we keep capabilities from being copied? >> Extended Stack Introspection - enablePrivilege(target) - disablePrivilege(target) - checkPrivilege(target) Target can be a string like "network access".. Whenever you're about to do something, check that we have that privilege. An applet is allowed to enable a privilege if its principal is enabled for that privilege.
# checkPrivilege(Target target) { # foreach stackFrame { # if stackFrame is enabled for target, return ok # if policyEngine disallows target for principal, # then return failed # } # return default # } >> ELFS Create a log of operations and undo operations, so you can undo things if you want. Also, you can audit your operations, e.g., to see who has seen certain files.. [11/23/99 01:07 PM] Lecture > Pragmatics We will compile a subset of scheme: # P ::= (program (define I E)* E) # E ::= L | I | (lambda (I*) E) | (call E E*) | # (primop P E*) | (let ((I E)*) E) | (if E E E) | # (set! I E) | (letrec ((I E)*) E) # # Sugar: (primop P E*) = (%P E*) >> Steps of compilation: - Desugar - Globalize (eliminate global variables) - Assignment Convert (get rid of set!) - CPS Convert - Closure Convert - Lambda Lifting - Data Conversion - Code Gen (turn into register machine code) > Desugaring How should we deal with letrec? A fixed-point operator is inefficient. Use the following instead: # D\lsemantics (letrec ((I1 E1) \ldots (In En)) Eb) \rsemantics = # D\lsemantics (let ((I1 #f) \ldots (In #f)) # (set! I1 E1) # \ldots # (set! In En) # Eb) \rsemantics But this only works for procedures: if evaluating any Ei references some other Ij, it will see #f instead of the intended value. So run a syntactic checker to make sure letrec gets procedures. The program desugars into a letrec.. Let desugars into a procedure call.. > Globalizing (cf. linking with a standard library) >> Inlining One way: textually substitute primops into the program. But set! lets us change the meanings of things like + >> Wrapping Alternative: wrap the whole program in a (let \ldots) that defines each identifier. >> Combined approach Inline whenever you can, and wrap only when necessary. We can't inline something if it's set!'ed. Also, we can't pass primops as procedures, so enclose them in lambdas if they're passed as arguments.
> Assignment Conversion # A\lsemantics I \rsemantics = (%cell-ref I) # # A\lsemantics (lambda (I1 \ldots In) E) \rsemantics = # (lambda (I1 \ldots In) # (let ((I1 (%cell I1)) \ldots (In (%cell In))) # A\lsemantics E\rsemantics)) # # A\lsemantics (set! I E) \rsemantics = (%cell-set! I A\lsemantics E \rsemantics) But we don't need to convert things that don't get set!'ed. So do a syntactic check to find out which variables are set!'ed. > CPS Conversion (Continuation Passing Style) Transform a program so that procedures never return (so we don't need a call stack). Also, transform it so there are no nested expressions. This allows us to perform the same optimizations on both the user's data/code and the system's data/code. >> A Simple CPS Converter Use continuations to model control flow, and make it explicit. Every CPS-converted expression will give a procedure that takes a continuation. # CPS\lsemantics I \rsemantics = (lambda (k) (call k I)) # # CPS\lsemantics L \rsemantics = (lambda (k) (call k L)) # # CPS\lsemantics (call Ep E1 \ldots En) \rsemantics = # (lambda (k) (call CPS\lsemantics Ep \rsemantics (lambda (vp) # (call CPS\lsemantics E1 \rsemantics (lambda (v1) # \ldots # (call CPS\lsemantics En \rsemantics (lambda (vn) # (call vp v1 \ldots vn k)))))))) # # CPS\lsemantics (lambda (I1 \ldots In) E) \rsemantics = # (lambda (k) # (call k (lambda (I1 \ldots In k') # (call CPS\lsemantics E \rsemantics k')))) # # CPS\lsemantics (if E1 E2 E3) \rsemantics = # (lambda (k) # (call CPS\lsemantics E1 \rsemantics # (lambda (v1) # (if v1 (call CPS\lsemantics E2 \rsemantics k) # (call CPS\lsemantics E3 \rsemantics k))))) # # CPS\lsemantics (primop P E) \rsemantics = # (lambda (k) (call CPS\lsemantics E \rsemantics (lambda (v1) # (let ((v2 (primop P v1))) (call k v2))))) CPS outputs expressions of the form: # Ecps ::= (call V V*) | # (let ((I W)*) Ecps) | # (if V Ecps1 Ecps2) # W ::= V | (primop P V*) # V ::= L | I | (lambda (I*) Ecps) But CPS produces a -lot- of code\ldots Especially a lot of lambdas.
For example:

# CPS\lsemantics (%odd? 1) \rsemantics =
#   (lambda (k) (call (lambda (k') (call k' 1))
#               (lambda (x) (call k (%odd? x)))))

We would prefer something like:

# (lambda (k) (call k (%odd? 1)))

We can get that with lambda-reduction.

Consider CPS-converting a let:

# CPS\lsemantics (let ((I Ed)) Eb) \rsemantics =
#   (lambda (k) (call CPS\lsemantics Ed \rsemantics (lambda (vd)
#     (let ((I vd)) (call CPS\lsemantics Eb \rsemantics k)))))

[11/30/99 01:04 PM] Lecture

>> Meta-CPS (A smarter CPS converter)

# MCPS: Exp \to Meta-Continuation \to Exp
# Meta-Continuation: Exp \to Exp

Top-level meta-continuation:

# [\lambda V . (call *top* V)]

where the square brackets indicate application at the meta-level\ldots

# [MCPS\lsemantics 1 \rsemantics [\lambda V.(call *top* V)]] = (call *top* 1)

We can convert a real continuation (an expression) into a
meta-continuation by using:

# [exp\to meta-cont k] = [\lambda V.(call k V)]

We can convert a meta-continuation into a real continuation by using:

# [meta-cont\to exp m] = (lambda (t) [m t])

But this gives us the following, which is bad for tail calls:

# [meta-cont\to exp [exp\to meta-cont k]] = (lambda (t) (call k t))

We want instead to have:

# [meta-cont\to exp [exp\to meta-cont k]] = k

so just define that as a special case of meta-cont\to exp.

# MCPS\lsemantics L \rsemantics = [\lambda m . [m L]]
#
# MCPS\lsemantics I \rsemantics = [\lambda m . [m I]]
#
# MCPS\lsemantics (primop P E1 E2) \rsemantics =
#   [\lambda m.[MCPS\lsemantics E1\rsemantics [\lambda V1.
#     [MCPS\lsemantics E2\rsemantics [\lambda V2.
#       (let ((temp (primop P V1 V2))) [m temp])]]]]]
#
# MCPS\lsemantics (lambda (I1\ldots In) Eb) \rsemantics =
#   [\lambda m. [m (lambda (I1\ldots In k)
#     [MCPS\lsemantics Eb\rsemantics [exp\to meta-cont k]])]]
#
# MCPS\lsemantics (call E1 E2) \rsemantics =
#   [\lambda m. [MCPS\lsemantics E1\rsemantics [\lambda V1.
#     [MCPS\lsemantics E2\rsemantics [\lambda V2.
#       (call V1 V2 [meta-cont\to exp m])]]]]]
#
# MCPS\lsemantics (if E1 E2 E3) \rsemantics =
#   [\lambda m. [MCPS\lsemantics E1\rsemantics [\lambda V1.
#     (let ((k [meta-cont\to exp m]))
#       (if V1
#           [MCPS\lsemantics E2\rsemantics [exp\to meta-cont k]]
#           [MCPS\lsemantics E3\rsemantics [exp\to meta-cont k]]))]]]

An example:

# MCPS\lsemantics (%+ (%* (%- 3 1) (%/ 9 3)) (%- 10 6)) \rsemantics [\lambda V.(call *top* V)]
# = (let* ((t1 (%- 3 1))
#          (t2 (%/ 9 3))
#          (t3 (%* t1 t2))
#          (t4 (%- 10 6))
#          (t5 (%+ t3 t4)))
#     (call *top* t5))

Another example:

# MCPS\lsemantics (%+ z (if b (%+ x 1) (%- x 1))) \rsemantics [\lambda V.(call *top* V)] =
# (let ((k (lambda (t1)
#            (let ((t2 (%+ z t1))) (call *top* t2)))))
#   (if b (let ((t3 (%+ x 1))) (call k t3))
#         (let ((t4 (%- x 1))) (call k t4))))

Yet another example:

# (begin (define (fact n) (if (= n 0) 1 (* n (fact (- n 1)))))
#        (fact 3))

Desugaring, globalizing, and assignment conversion give us:

# (let ((fact (%cell #u)))
#   (%cell-set! fact (lambda \ldots))
#   (call (%cell-ref fact) 3))

Consider how to MCPS the lambda that defines fact:

# (lambda (n k2)
#   (let ((t3 (%= n 0)))
#     (if t3
#         (call k2 1)
#         (let ((t4 (%cell-ref fact)))
#           (let ((t5 (%- n 1)))
#             (call t4 t5 (lambda (t6)
#               (let ((t7 (%* t6 n)))
#                 (call k2 t7)))))))))

[12/01/99 03:04 PM] Recitation

>> MCPS conversion, continued

A meta-continuation wants a specific kind of expression as its input. In
particular, a meta-continuation is like an expression with a hole: if you
put the wrong thing in that hole, it won't work. An example of a
meta-continuation, written as a Scheme procedure, is:

# my_mcont = (lambda (exp) `(primop + 5 ,exp))

In particular, the argument to a meta-continuation must be:
- a literal (including a lambda)
- an identifier

# (define (mcps exp mcont)
#   (cond ((id? exp) (mcps-id exp mcont))
#         ((lit? exp) (mcps-lit exp mcont))
#         ((let? exp) (mcps-let exp mcont))
#         \ldots))
#
# (define (mcps-id id mcont) (mcont id))
# (define (mcps-lit lit mcont) (mcont lit))
#
# (define (mcps-if exp mcont)
#   (let ((test (cadr exp))
#         (con (caddr exp))
#         (alt (cadddr exp)))
#     (mcps test
#           (lambda (simp)
#             `(let ((k (lambda (t) ,(mcont 't))))
#                (if ,simp
#                    ,(mcps con (lambda (simp) `(call k ,simp)))
#                    ,(mcps alt (lambda (simp) `(call k ,simp)))))))))

Try mcps'ing a let with one binding:

# (define (mcps-let exp mcont)
#   (let ((var (let-var exp))
#         (binding (let-binding exp))
#         (body (let-body exp)))
#     (mcps binding
#           (lambda (s)
#             `(let ((,var ,s))
#                ,(mcps body mcont))))))

Try mcps'ing cwcc! :)

# (define (mcps-cwcc exp mcont)
#   (let ((mcps-proc (cadr exp)))
#     `(let ((exit (lambda (t trash) ,(mcont 't))))
#        ,(mcps `(call ,mcps-proc exit)
#               (lambda (s) `(call exit ,s 'trash))))))

[12/02/99 01:08 PM] Lecture

> Closure Conversion

We want to get rid of all the free variables in sub-expressions. For
example, consider:

# (let ((p1 (lambda (n)
#             (lambda (k) (+ n k)))))
#   (let ((p2 (p1 7)))
#     (p2 p3)))

Assume we have closure primops, which basically construct and reference
tuples:

# (%closure E\lambda E1 \ldots En) ; make a new closure
# (%closure-ref Ec n)             ; nth elt of Ec

Let each procedure take its own closure as an argument:

# (let ((p1 (%closure (lambda (.c1. n)
#             (%closure (lambda (.c2. k)
#                         (+ (%closure-ref .c2. 1) k))
#                       n)))))
#   (let ((p2 (call-closure p1 7)))
#     (call-closure p2 p3)))
#
# where (call-closure Ec E1 \ldots En) =
#   (let ((Itemp Ec))
#     (call (%closure-ref Itemp 0)
#           Itemp E1 \ldots En))

>> Design Choices in Closure Conversion

>>> Nested vs. flat closures

# (lambda (a b)
#   (lambda (c d)
#     (lambda (e f)
#       (a c e))))

The innermost closure might be:

# <\lambda3, a, c>   ; \gets flat
# <\lambda3, ptr, c> ; \gets nested, where ptr points to the closure <\lambda2, a>

Doing set! on a variable copied into a flat closure would be a problem,
except that we've already done assignment conversion! Most real compilers
use nested frames, and put them on the stack.
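The flat-closure representation above can be sketched directly in Python:
a closure is a tuple whose slot 0 is the code and whose remaining slots
hold free-variable values. The names `l1`, `l2`, `closure_ref`, and
`call_closure` are invented to mirror the primops in the notes, not part
of any real runtime.

```python
def closure_ref(c, n):
    # Slot 0 is the code pointer; slots 1..n are captured values.
    return c[n]

def call_closure(c, *args):
    # Pass the closure itself as the first argument, as in the
    # call-closure expansion in the notes.
    return closure_ref(c, 0)(c, *args)

# Closure-converted version of (lambda (n) (lambda (k) (+ n k))):
def l1(c1, n):
    return (l2, n)                 # build the inner closure, capturing n

def l2(c2, k):
    return closure_ref(c2, 1) + k  # fetch n from slot 1

p1 = (l1,)                         # the outer lambda closes over nothing
p2 = call_closure(p1, 7)
print(call_closure(p2, 3))         # 7 + 3
```

Because slot 0 always holds the code, `call_closure` works uniformly on
every converted procedure, which is exactly what lambda lifting relies on.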
> Lambda Lifting

# (let ((p1 (%closure (lambda (.c1. n)
#             (%closure (lambda (.c2. k)
#                         (+ (%closure-ref .c2. 1) k))
#                       n)))))
#   (let ((p2 (call-closure p1 7)))
#     (call-closure p2 p3)))
# \to
# (program
#   (define .l1. (lambda (.cl1. n) (%closure .l2. n)))
#   (define .l2. (lambda (.cl2. k) (+ (%closure-ref .cl2. 1) k)))
#   (let ((p1 (%closure .l1.))) \ldots))

So now we have a program of the form:

# (program (define .l1. \ldots)
#          \ldots
#          (define .ln. \ldots)
#          Ecps)

Convert each .li. into a code block. Then the lambdas are all basically
labels, and we can branch between them.

After CPS conversion, closure conversion, and lambda lifting\ldots we have
something pretty simple.

> Data Conversion

To allow GC to work, we need to tag data items. Assume 32-bit words.
Allocate the low 2 bits for a tag:
- 00 = int
- 01 = pointer
- 10 = immediate
- 11 = block header

Register words should never end in 11.

Immediate values:
- \ldots b0010 = boolean (b=0 is false, b=1 is true)
- \ldots 0110 = nil
- \ldots 1010 = unspecified
- \ldots c1110 = character c

Block headers carry a type field, where type is:
- 0000 = cell
- 0001 = pair
- 0010 = vector
- 0011 = string
- 0100 = closure
- 0101 = code

[12/07/99 01:03 PM] Lecture

> Garbage Collection (Lecturer: Olin Shivers)

GC is dynamic; types are static. Try to do static garbage collection:
figure out at compile time when we will know that memory will never be
reused. Then we can just deallocate it at that point and avoid GC costs.
But static GC can't be complete, because sometimes you can't construct
the appropriate proofs.

>> 3 types of dynamic GC:
- stop & copy
- mark & sweep
- ref count

>>> stop & copy

Basically equivalent to a breadth-first search of the data structures.
You could try alternatives like DFS, which will tend to give better
locality, but it has somewhat higher constants, so it's not used very
much.
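The 2-bit tag layout from the Data Conversion section above can be
sketched with a little bit arithmetic. This is a hedged illustration,
assuming small non-negative integers and invented helper names.

```python
# Low-2-bit tags from the notes: 00 int, 01 pointer, 10 immediate,
# 11 block header.
TAG_INT, TAG_PTR, TAG_IMM, TAG_HDR = 0b00, 0b01, 0b10, 0b11
TAG_MASK = 0b11

def tag_int(n):
    # The integer's value lives in the upper 30 bits of a 32-bit word.
    # Sketch only: assumes n is small and non-negative.
    return ((n << 2) | TAG_INT) & 0xFFFFFFFF

def untag_int(w):
    return w >> 2

def tag_of(w):
    return w & TAG_MASK

w = tag_int(5)
print(tag_of(w) == TAG_INT, untag_int(w))
```

The invariant that register words never end in 11 falls out directly:
only heap block headers carry that tag, so the collector can distinguish
them during a scan.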
>>> Comparisons
- Allocation is cheaper in stop & copy than in mark & sweep.
- Locality of newly allocated data is better in S&C than in M&S.
- S&C does better with large memories, since it only has to deal with the
  live data.
- In S&C, if we don't know whether a word is an int or a pointer, we can't
  tell what to do: if we treat it as a pointer, we'll change the int; if
  we treat it as an int, we may get dangling-pointer problems. But in M&S
  we can just guess that such words are pointers, and since we don't move
  anything, we'll be safe. So M&S can be good for C.
- Ref counting has no long pauses.
- Ref counting is timely: you get memory back as soon as it becomes
  unreachable.
- Ref counting has trouble with circular data structures.
- Ref counting is slow.

>> Variants of GC

>>> Generational GC

Make some observations to help us figure out how to optimize GC:
- Young things die frequently. So divide the allocation space into two
  generations: the young generation is small, the old one is large. Do a
  "minor collection" where you GC the small space and move the live stuff
  into the old generation. But what do we do when the old generation
  points into the new generation?
- Young things usually point to old things. So we can make the old/young
  scheme work by keeping track of new pointers from the old generation
  into the young generation, and adding those to the root set.

>>> Real-Time Concurrent GC
- GC introduces only bounded pauses.
- GC runs independently of other processes.

Break from-space and to-space into pages. Maintain the invariant that the
registers always point into to-space. Every page in to-space has a bit
that indicates whether everything on it has been scanned yet. Read-protect
the pages where not everything points into to-space. If we trap on a read
of one of those pages, run GC for that page. (But traps are really slow,
so people don't do this.)

>> Atomic Allocs

Allocations need to be atomic. But standard methods add a lot of overhead.
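For concreteness, the operation that needs to be atomic is usually just a
bump of the free pointer: fill in the new object's words, then commit by
incrementing fp. A minimal sketch, with the class and names invented for
illustration:

```python
class BumpAllocator:
    """Sketch of bump-pointer allocation where the final fp increment
    is the single commit point (cf. the "backwards" scheme below)."""

    def __init__(self, size):
        self.heap = [None] * size
        self.fp = 0                      # free pointer

    def alloc(self, nwords, init):
        if self.fp + nwords > len(self.heap):
            raise MemoryError('out of space; a real runtime would GC here')
        base = self.fp
        for i, word in enumerate(init):  # fill the object first...
            self.heap[base + i] = word
        self.fp += nwords                # ...then commit with one increment
        return base

h = BumpAllocator(16)
p = h.alloc(2, ['pair-header', 42])
print(p, h.fp)
```

If an interrupt lands before the final increment, the partially written
words are above fp and simply get overwritten when the alloc restarts.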
Different methods give light-weight atomicity to allocation:
- Forwards: declare that we'll never take interrupts in the middle of
  basic blocks (blocks of code that run straight through). Handle
  hardware interrupts at the ends of basic blocks, and check at the
  beginning of each basic block that we have enough free memory.
  Alternative: to do allocs, jump to the alloc code; if we're in that
  code, delay interrupts until the alloc code finishes.
- Backwards: forwards allocation doesn't deal well with page faults,
  which can make a basic block take a very long time to finish. Another
  idea: if we wait to increment the fp until the very end of the alloc,
  then the "fp += 8" (or whatever) is the atomic commit point. So if we
  get interrupted in the middle of an alloc, check when we return from
  the interrupt; if we see that we're in the middle of an alloc, restart
  it. But this is painful to port and has high overhead.
- Backwards II: use the low bits of the fp as lock bits. The last
  instruction (fp += 7) simultaneously commits the alloc and frees the
  lock.
- Sideways: give each thread its own allocation pool. But this causes
  fragmentation and wasted memory.

[12/08/99 03:04 PM] Recitation

> Review

>> Content of the class

* Dynamic Semantics: how can you specify what a program means?
  - Operational Semantics
  - Denotational Semantics
  - Fixed points
  - "Standard" semantics and control
  - Translators
* Static Semantics
  - Type Systems
    - Monomorphic
    - Subtypes
    - Polymorphic
  - Type reconstruction
  - Abstract types & Pattern matching (playing with types)
  - Effect Systems & Effect Reconstruction
* Pragmatics
  - Compiler
  - Runtime
  - Data Representations
  - Garbage Collection

The end.