Sunday, March 16, 2008
Types at reddit
I have created a subreddit for types. (Static analysis is also welcome.)
Posted by Jim Apple at 3/16/2008 09:31:00 PM 1 comments
Friday, February 22, 2008
Types, named (was: "Name that type!"), plus two more questions
Here are the answers to the previous quiz.
The first datatype holds any factorial number of values. Here it is again, with one small change and a few deobfuscations:
> data FacHelp (f :: * -> *) (y :: *) =
>       Stop y
>     | More (FacHelp (OneMore f) (f y))
> newtype OneMore (f :: * -> *) (y :: *) =
>     OneMore (y, f y)
> newtype Identity y = Identity y
> type Factorial y = FacHelp Identity y
Here's that datatype again, renamed:
> data LongList x = Two x x
>                 | AndAnother x (LongList x)
> data Composite x = Composite (LongList (LongList x))

Update:
Chung-chieh Shan pointed out to me that I got that datatype wrong. (The other text above is correct, I think.) Anyway, here's his counterexample:
> test :: Composite Char
> test = Composite (Two (Two 'a' 'b')
>                       (AndAnother 'c' (Two 'd' 'e')))

This is a correct (I think!) implementation of a container that can't be of a prime size:
> data Product f g a = Times (f (g a))
>                    | LeftIncr (Product (OneMore f) g a)
>                    | RightIncr (Product f (OneMore g) a)
> newtype Pair a = Pair (a,a)
> type Comp = Product Pair Pair

End update
The third datatype uses the type H, which encodes containers with a square number of elements. H is based on the idea that the sequence of partial sums of the sequence of odd natural numbers is the same as the sequence of squares. That is, {0, 1, 1+3, 1+3+5, …} = {0^{2}, 1^{2}, 2^{2}, 3^{2}, …}.
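This identity is easy to spot-check numerically; here is a minimal sketch (the names oddPartialSums, squares, and agreeUpTo are mine, not from the post):

```haskell
-- Partial sums of the odd numbers coincide with the squares:
-- 0, 1, 1+3, 1+3+5, ... = 0^2, 1^2, 2^2, 3^2, ...
oddPartialSums :: [Integer]
oddPartialSums = scanl (+) 0 [1,3..]   -- 0, 1, 4, 9, 16, ...

squares :: [Integer]
squares = map (^ 2) [0..]

agreeUpTo :: Int -> Bool
agreeUpTo n = take n oddPartialSums == take n squares
```

`agreeUpTo 1000` evaluates to True.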
We can now use Lagrange's four-square theorem to see that question 3(b) is, strangely, a container that can hold any number of elements. An extension of this by Legendre showed [cite] that every number not of the form 4^{k}(8m + 7) is representable as the sum of three squares, and this answers question 3(a).
Here are both parts again, renamed and cleaned up a bit:
> data SqHelp x y = End
>                 | Extra x (SqHelp (y,y,x) y)
> type Square a = SqHelp a a
> data Legendre x = Legendre (Square x) (Square x) (Square x)
> data Lagrange x = Lagrange (Square x) (Legendre x)

Here are two more questions:
4. Can we write something analogous to 3(a) in a more direct way?
5.
> data M a b = N b
>            | O (M (a,b) a)
> newtype P a = P (M a a)
Posted by Jim Apple at 2/22/2008 09:56:00 PM 4 comments
Labels: GADTs, nested types
Name that type! (nested types edition)
If we want to write a type for binary trees in Haskell, it is easy to do so:
> data BinaryTree a = Leaf
>                   | InternalNode (BinaryTree a) a (BinaryTree a)

The structure of the type guarantees certain things about its inhabitants. For instance, every internal node has exactly two children. If we want to guarantee stronger invariants, we can change the shape of the type. We can write a type for complete binary trees (that is, binary trees in which every leaf is at the same depth) as
> data CompleteBinaryTree a = Leaves
>                           | NonLeaves a (CompleteBinaryTree (a,a))

The argument in the recursive part of this datatype is not the same as it is in the declaration. Datatypes with this property are called nested or non-regular or heterogeneous or non-uniform. We can express all sorts of invariants with these datatypes. Here's another example:
> data AList a b = Nil
>                | Cons a (AList b a)

If we want to manipulate lists that alternate between two types a and b, we could use the type [Either a b], but this doesn't ensure that we don't have two as or two bs in a row. We could, instead, use the type [(a,b)], but this would only allow us to create lists of even length. By switching the position of the parameters in the recursion in AList, we can represent alternating lists of any length.
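To make the alternation concrete, here is a small standalone example (the names example and alength are mine). Note that functions over nested types like AList generally require explicit signatures, because the recursion is polymorphic:

```haskell
-- The alternating-list type from the post, plus a length function.
data AList a b = Nil | Cons a (AList b a)

-- A length-3 alternating list. [Either Int String] could not rule out
-- two Ints in a row, and [(Int,String)] could only have even length.
example :: AList Int String
example = Cons 1 (Cons "one" (Cons 2 Nil))

-- The recursive call is at type AList b a, not AList a b, so this is
-- polymorphic recursion: the type signature is mandatory.
alength :: AList a b -> Int
alength Nil = 0
alength (Cons _ rest) = 1 + alength rest
```

`alength example` is 3.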
Here are some other examples of data structure invariants we can ensure with nested types:
- Cyclic data structures (mirror)
- Many different types of lists with fast cons, snoc, concatenation, and indexing
- Red-black trees (a paper, code for that paper), AVL trees (mirror, mirror 2)
- Square matrices, triangular matrices (in various works by Ralph Matthes), Toeplitz matrices, rectangular matrices (see "Manufacturing Datatypes", above)
- Efficient-merge heaps (see Okasaki's thesis & book, above), Braun trees, left-complete trees (see "Manufacturing Datatypes", above)
- Binomial queues (mirror, mirror 2)
- Tries (mirror, mirror 2, code, code mirror, old version, old version mirror) (see also Okasaki's thesis & book)
- Well-formed terms in the locally-nameless lambda calculus (mirror)
> data A x y = B (x y)
>            | C (A (D x) (x y))
> newtype D x y = D (y, x y)
> newtype K x = K x
> type QuestionOne = A K
>
> data E x = F x x
>          | G x (E x)
> data QuestionTwo x = Q2 (E (E x))
>
> data H x y = I
>            | J y x x (H (y,x) y)
> type L = H ()
> data QuestionThreePartA x = Q3a (L x) (L x) (L x)
> data QuestionThreePartB x = Q3b (L x) (QuestionThreePartA x)
Posted by Jim Apple at 2/22/2008 03:25:00 AM 7 comments
Labels: GADTs, Leibniz equality, nested types
Sunday, February 17, 2008
Leibniz and Liskov
Leibniz equality of two types a and b is the proposition that in any context, one can be substituted for the other:

f a -> f b

A value of this type brings to the value level the proof that two types are equal. The only value of this type is the identity function.
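In Haskell (with rank-2 types) the proposition can be written as a datatype; this is a sketch using my own names refl and subst:

```haskell
{-# LANGUAGE RankNTypes #-}

-- Leibniz equality: a proof that a and b are interchangeable in every
-- context f. (refl and subst are my names for the usual operations.)
newtype Equal a b = Equal (forall f. f a -> f b)

-- The only total inhabitant wraps the identity function:
refl :: Equal a a
refl = Equal id

-- A proof lets us rewrite an f a into an f b, for any f we like:
subst :: Equal a b -> f a -> f b
subst (Equal s) = s
```

For instance, `subst refl [1,2,3]` is just `[1,2,3]`, with f instantiated to the list constructor.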
The Liskov substitution principle is very similar to Leibniz equality, but only in the contravariant position: if a function takes an argument of type b and a is a subtype of b, then the function must accept an argument of type a. If f is a type constructor that uses its argument only negatively, then a is a subtype of b if there is a value of type

f b -> f a

For example, λx . x -> b uses x negatively, and any b -> b is also trivially an a -> b when a is a subtype of b. The only value of type f b -> f a is the cast function.
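One way to transcribe this into Haskell is to fix f to the canonical negative context, f x = x -> r. The names Sub, cast, and intIsInteger below are mine; this is a sketch of the idea, not anything from the post:

```haskell
{-# LANGUAGE RankNTypes #-}

-- Evidence that a is a "subtype" of b: every negative context that
-- accepts b also accepts a.
type Sub a b = forall r. (b -> r) -> (a -> r)

-- The only thing such evidence can do is cast:
cast :: Sub a b -> a -> b
cast sub = sub id

-- A concrete instance: every Int can stand in for an Integer.
intIsInteger :: Sub Int Integer
intIsInteger useB = useB . fromIntegral
```

`cast intIsInteger (3 :: Int)` is the Integer 3.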
Let's write the kind for type constructors that use their argument negatively as * (-)-> * rather than just * -> *, and the kind for type constructors that use their argument positively as * (+)-> *. Then, the following terms should validate our understanding that supertyping is dual to subtyping:

dual : ∀ a:* . ∀ b:* . ∀ g: * (+)-> * . (∀ f: * (-)-> * . f b -> f a) -> g a -> g b
dual a b g less bot = less [λx:* . g x -> g b] (λy:g b . y) bot

dual' : ∀ a:* . ∀ b:* . ∀ f: * (-)-> * . (∀ g: * (+)-> * . g a -> g b) -> f b -> f a
dual' a b f more top = more [λx:* . f x -> f a] (λy:f a . y) top

So subtyping, like type equality, can be brought to the value domain. I now have two questions:
Posted by Jim Apple at 2/17/2008 03:02:00 PM 0 comments
Labels: Leibniz equality, Liskov substitutability
Wednesday, January 9, 2008
Extra type safety using polymorphic types as first-level refinements
This post is literate Haskell.
I will demonstrate a new technique for using polytypes as first-level refinement types (mirror). The goal, as usual, is for types to better express program invariants and ensure programs are safe.
I'm going to demonstrate using the risers function, as presented in Dana N. Xu's ESC/Haskell (mirror), which references Neil Mitchell's Catch.
> {-# OPTIONS -fglasgow-exts #-}
> -- The LANGUAGE pragma is usually a pain for exploratory programming.
Below are the risers functions as presented by Xu and Mitchell. They are the same function, though slightly syntactically different. risers returns the sorted sublists of a list.
Risers has two properties that we are going to discuss:
- None of the lists in the returned value are empty.
- If the argument is non-empty, the return value is also non-empty.
> risersXu :: (Ord t) => [t] -> [[t]]
> risersXu [] = []
> risersXu [x] = [[x]]
> risersXu (x:y:etc) =
>     let ss = risersXu (y : etc)
>     in case x <= y of
>          True -> (x : (head ss)) : (tail ss)
>          False -> ([x]) : ss
>
> risersMitchell :: Ord a => [a] -> [[a]]
> risersMitchell [] = []
> risersMitchell [x] = [[x]]
> risersMitchell (x:y:etc) = if x <= y
>                            then (x:s):ss
>                            else [x]:(s:ss)
>     where (s:ss) = risersMitchell (y:etc)
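For concreteness, here is a standalone copy of risersXu (as I read it, with the case on x <= y written as an if) and its behavior on a small input:

```haskell
-- A self-contained copy of risersXu for experimentation.
risersXu :: Ord t => [t] -> [[t]]
risersXu [] = []
risersXu [x] = [[x]]
risersXu (x:y:etc) =
  let ss = risersXu (y:etc)
  in if x <= y
     then (x : head ss) : tail ss   -- extend the current ascending run
     else [x] : ss                  -- start a new run

-- risersXu [1,2,3,1,2] evaluates to [[1,2,3],[1,2]],
-- the two maximal ascending runs of the input.
```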
Neither one of these functions is obviously safe. Xu uses head and tail and ESC/Haskell to create a proof of their safety. The unsafe part of Mitchell's code is the where clause, and Mitchell also presents a tool to prove this safe.
Our goal will be to write this function in a safe way, with a type signature that ensures the two properties we expect from the function. We also aim to do this without having to change the shape of the code, only the list implementation we are using.
The present unsafe operations in risersXu and risersMitchell depend on the second property of the function: non-null inputs generate non-null outputs. We could write a type for this function using a trusted library with phantom types for branding (paper mirror). This technique (called lightweight static capabilities) can do this and much else as well, but since clients lose all ability to pattern match (even in case), risers becomes much more verbose. We could also write a type signature guaranteeing this by using GADTs. Without using one of these, incomplete pattern matching or calling unsafe head and tail on the result of the recursive call seems inevitable.
Here's another way to write the function which does away with the need for the second property on the recursive call, substituting instead the need for the first property that no lists in the return value are empty:
> risersAlt :: (Ord t) => [t] -> [[t]]
> risersAlt [] = []
> risersAlt (x:xs) =
>     case risersAlt xs of
>       [] -> [[x]]
>       w@((y:ys):z) ->
>           if x <= y
>           then (x:y:ys):z
>           else ([x]):w
>       ([]:_) -> error "risersAlt"
The error is never reached.
Though ensuring the second property with our usual types seems tricky, ensuring the first is not too tough:
> type And1 a = (a,[a])
>
> risersAlt' :: Ord a => [a] -> [And1 a]
> risersAlt' [] = []
> risersAlt' (x:xs) =
>     case risersAlt' xs of
>       [] -> [(x,[])]
>       w@(yys:z) ->
>           if x <= fst yys
>           then (x, fst yys : snd yys):z
>           else (x,[]):w
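Consumers who want ordinary lists back can coerce with one map; toLists is my name for this hypothetical helper, and the copy of risersAlt' here just makes the snippet self-contained:

```haskell
-- Pairs (x, xs) represent the non-empty list x:xs.
type And1 a = (a, [a])

risersAlt' :: Ord a => [a] -> [And1 a]
risersAlt' [] = []
risersAlt' (x:xs) =
  case risersAlt' xs of
    [] -> [(x, [])]
    w@(yys:z) ->
      if x <= fst yys
      then (x, fst yys : snd yys) : z
      else (x, []) : w

-- Coerce the non-empty-by-construction results back to plain lists:
toLists :: [And1 a] -> [[a]]
toLists = map (uncurry (:))
```

`toLists (risersAlt' [1,2,3,1,2])` gives `[[1,2,3],[1,2]]`, matching the original risers.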
It is now much easier to see that risers is safe: There is one pattern match and one case, and each is simple. No unsafe functions like head or tail are called. It does have three disadvantages, however.
First, the second property is still true, but the function type does not enforce it. This means that any other callers of risers may have to use incomplete pattern matching or unsafe functions, since they may not be so easy to transform. It is my intuition that it is not frequently the case that these functions are tricky to transform, but perhaps Neil Mitchell disagrees.
We could fix this by writing another risers function with type And1 a -> And1 (And1 a), but this brings us to the second problem: And1 a is not a subtype of [a]. This means that callers of our hypothetical other risers function (as well as consumers of the output from risersAlt') must explicitly coerce the results back to lists.
Finally, if we are wrong about the first property, and risers does return an empty list for some non-empty input i, then for any x, risers (x:i) is ⊥, while risersAlt' (x:i) is [(x,[])]. Thus, the equivalence of these two function definitions depends on the truth of the second property on the first function, which is something we were trying to get out of proving in the first place! Of course, if we're interested in the correctness of risersAlt', rather than its equivalence with risersXu or risersMitchell, then it is not too difficult to reason about. But part of the point of this was to get the compiler to do some of this reasoning for us, without having to change the shape of the code.
Let's write more expressive signatures using something we may have noticed when using GHCi. The only value of type forall a . [a] (excluding lists of ⊥s) is []. Every value of type forall a . Maybe a is Nothing, and every forall a . Either a Int has an Int in Right. Andrew Koenig noticed something similar when learning ML (archived), and the ST monad operates on a similar principle (paper with details about ST, mirror).
> data List a n = Nil n
>               | forall r . a :- (List a r)
The only values of type forall a . List a n use the Nil constructor, and the only values of type forall n . List a n use the :- constructor.
> infixr :-
>
> box x = x :- Nil ()
>
> type NonEmpty a = forall n . List a n
>
> onebox :: NonEmpty Int
> onebox = box 1
>
> onebox' :: List Int Dummy
> onebox' = onebox
>
> data Dummy
>
> -- This doesn't compile:
> -- empty :: NonEmpty a
> -- empty = Nil ()
NonEmpty a is a subtype of List a x for all types x.
> data Some fa = forall n . Some (fa n)
>
> safeHead :: NonEmpty a -> a
> safeHead x = unsafeHead x where
>     unsafeHead (x :- _) = x
>
> safeTail :: NonEmpty a -> Some (List a)
> safeTail x = unsafeTail x where
>     unsafeTail (_ :- xs) = Some xs
Unfortunately, we'll be forced to Some and unSome some values, since Haskell does not have first-class existentials, and it takes some thinking to see that safeHead and safeTail are actually safe.
Here is a transformed version of Mitchell's risers:
> risersMitchell' :: Ord a => List a n -> List (NonEmpty a) n
> risersMitchell' (Nil x) = (Nil x)
> risersMitchell' (x :- Nil _) = box (box x)
> risersMitchell' ((x {- :: a -}) :- y :- etc) =
>     case risersMitchell' (y :- etc) {- :: NonEmpty (NonEmpty a) -} of
>       Nil _ -> error "risersMitchell'"
>       s :- ss -> if x <= y
>                  then (x :- s) :- ss
>                  else (box x) :- s :- ss
Since we can't put the recursive call in a where clause, we must use a case with some dead code. The type annotations are commented out here to show they are not needed, but uncommenting them shows that the recursive call really does return a non-empty list, and so the Nil case really is dead code.
This type signature ensures both of the properties listed when introducing risers. The key to the non-empty-arguments-produce-non-empty-results property is that the variable n in the signature is used twice. That means applying risersMitchell' to a list with a fixed (or existential) type as its second parameter can't produce a NonEmpty list.
> risersXu' :: Ord a => List a r -> List (NonEmpty a) r
> risersXu' (Nil x) = Nil x
> risersXu' (x :- Nil _) = box (box x)
> risersXu' (((x :: a) :- y :- etc) :: List a r) =
>     let ss = risersXu' (y :- etc)
>     in case x <= y of
>          True -> case safeTail ss of
>                    Some v -> (x :- (safeHead ss)) :- v
>          False -> (box x) :- ss
Here we see that the type annotation isn't necessary to infer that risers applied to a non-empty list returns a non-empty list. The value ss isn't given a type signature, but we can apply safeHead and safeTail. The case matching on safeTail is the pain of boxing up existentials.
This is the first version of risers with a type signature that gives us the original invariant Xu and Mitchell can infer, as well as calling no unsafe functions and containing no incomplete case or let matching. It also returns a list of lists, just like the original function, and has a definition in the same shape.
With first-class existentials, this would look just like Xu's risers (modulo built-in syntax for lists). With let binding for polymorphic values, risersMitchell' would look just like Mitchell's original risers, but be safe by construction. Let binding for polymorphic values would also allow non-trusted implementations of safeTail and safeHead to be actually safe.
For contrast, here is a GADT implementation of risers:
> data GList a n where
>     GNil :: GList a IsNil
>     (:|) :: a -> GList a m -> GList a IsCons
>
> infixr :|
>
> data IsNil
> data IsCons
>
> gbox x = x :| GNil
>
> risersMitchellG :: Ord a => GList a n -> GList (GList a IsCons) n
> risersMitchellG GNil = GNil
> risersMitchellG (c :| GNil) = gbox $ gbox c
> risersMitchellG (x :| y :| etc) =
>     case risersMitchellG (y :| etc) of
>       -- GHC complains, "Inaccessible case alternative: Can't match types `IsCons' and `IsNil'
>       --                 In the pattern: GNil"
>       -- GNil -> error "risers"
>       s :| ss ->
>           if x <= y
>           then (x :| s) :| ss
>           else (gbox x) :| s :| ss
This is safe and has its safety checked by GHC. It also does not require existentials, though when using this encoding, many other list functions (such as filter) will.
Now here is a lightweight static capabilities version of risers:
> -- module Protected where
>
> -- Export the type constructor; do not export the value constructor.
> newtype LWSC b a = LWSC_do_not_export_me [a]
>
> data Full
> type FullList = LWSC Full
>
> data Any
>
> lnil :: LWSC Any a
> lnil = LWSC_do_not_export_me []
>
> lcons :: a -> LWSC b a -> FullList a
> lcons x (LWSC_do_not_export_me xs) = LWSC_do_not_export_me (x:xs)
>
> lwhead :: FullList a -> a
> lwhead (LWSC_do_not_export_me x) = head x
>
> data Some' f n = forall a . Some' (f a n)
>
> lwtail :: FullList a -> Some' LWSC a
> lwtail (LWSC_do_not_export_me a) = Some' (LWSC_do_not_export_me (tail a))
>
> deal :: LWSC b a -> LWSC Any c -> (FullList a -> FullList c) -> LWSC b c
> deal (LWSC_do_not_export_me []) _ _ = LWSC_do_not_export_me []
> deal (LWSC_do_not_export_me x) _ f =
>     case f (LWSC_do_not_export_me x) of
>       LWSC_do_not_export_me z -> LWSC_do_not_export_me z
>
> nullcase :: LWSC b a -> c -> (FullList a -> c) -> c
> nullcase (LWSC_do_not_export_me []) z _ = z
> nullcase (LWSC_do_not_export_me x) _ f = f (LWSC_do_not_export_me x)
>
> -- module Risers where
>
> lbox x = lcons x lnil
>
> risersXuLW :: Ord a => LWSC b a -> LWSC b (FullList a)
> risersXuLW x =
>     deal x
>          lnil
>          (\z -> let x = lwhead z
>                 in case lwtail z of
>                      Some' rest ->
>                          nullcase rest
>                                   (lbox (lbox x))
>                                   (\yetc ->
>                                        let y = lwhead yetc
>                                            etc = lwtail yetc
>                                            ss = risersXuLW yetc
>                                        in if x <= y
>                                           then case lwtail ss of
>                                                  Some' v -> lcons (lcons x $ lwhead ss) v
>                                           else lcons (lbox x) ss))
There is a good bit of code that must go in the protected module, including two different functions for case dispatch. These functions are used instead of pattern matching, and they make the definition of risers much more verbose, though I may not have written it with all the Oleg-magic possible to make it more usable.
Out of the three, I think GADTs are still the winning approach, but it's fun to explore this new idea, especially since Haskell doesn't have traditional subtyping.
Posted by Jim Apple at 1/09/2008 11:02:00 PM 1 comments
Labels: GADTs, lightweight static capabilities, polymorphism, refinement types
Sunday, January 6, 2008
POPL 2008 and affiliated events
This week is POPL and affiliated events. I'll be at the first and third days of VMCAI, the Coq tutorial, and POPL itself. Unfortunately, I'll have to miss PEPM.
Posted by Jim Apple at 1/06/2008 05:36:00 PM 0 comments
Labels: conferences
Thursday, December 27, 2007
A common language for dependently-typed programming?
The Haskell project was begun in order to unify "more than a dozen non-strict, purely functional programming languages" (mirror). We are rapidly approaching that many viable choices for programming with dependent types.
- Epigram
- ATS (successor to Dependent ML and Xanadu)
- Agda (successor to Cayenne)
- Ωmega
- NuPrl
- Twelf
- Isabelle
- Coq
- Delphin
- Concoqtion
- Aldor
- Guru (successor to RSP1)
And now, a list of caveats about the above list:
- Some of the items on this list are theorem provers first and dependently-typed programming languages second. Adam Chlipala argues that this is not such a problem for Coq.
- This list does not include several languages that are new or new variants (PIE, Matita), not checked at compile-time (Sage), or not actively maintained (most proof assistants, Charity).
- Some of these choices may not be real options for programming with dependent types. Twelf is designed for programming about programming languages, and, if I remember correctly, doesn't have parametric polymorphism because of something having to do with higher-order abstract syntax. Aldor can't understand most type equalities, making its dependent types somewhat weak, though Atypical and Aldor-- aimed to remedy this five years ago. Agda and Epigram have New Exciting Versions 2 coming out, so they suffer from the Osborne Effect.
Is it time yet to do anything about the cornucopia of options? When Haskell began, the many similar options resembled one another; the list above is heterogeneous. Is that because it's too early to tell what the Right choices will be? Is this problem fundamentally harder than the one that inspired the Haskell precursors?
I'm not sure, though deciding how to ensure termination seems particularly tricky. Nonetheless, there are other decisions that aren't as directly related to the central dependent type issues, like syntax, functional purity (though this is related to termination), and editing environment. Maybe these issues will divide the ecosystem into a few niches, each dominated by one player. In the meantime, it's tough to choose.
Posted by Jim Apple at 12/27/2007 09:15:00 PM 9 comments
Labels: dependent types, Haskell
Friday, November 9, 2007
Lectures on the Curry-Howard Isomorphism saves the day
Morten Heine B. Sørensen and Paweł Urzyczyn's Lectures on the Curry-Howard Isomorphism (mirror 1, mirror 2, errata) provides answers to two questions I asked here before:
I speculated about a complexity problem and its relation to types. On page 104 of LotCHI (92 in print), the authors note that, in the simply-typed lambda calculus, a reduction of the type inhabitation problem to the type checking problem would show that P = PSPACE.
I also wondered if there was a Heyting algebra for higher-order polymorphic typed lambda calculi that we can use to show that some type is uninhabited. LotCHI section 12.2 gets part of the way there (the authors cover F2; I would need F3, I think), with the answer that yes, there is such an algebra, but no, it doesn't make things so easy.
Posted by Jim Apple at 11/09/2007 06:11:00 PM 0 comments
Labels: Heyting algebra
Tuesday, September 4, 2007
Fuzzy Logic and Intuitionistic Logic
I recently got rid of a pop science book on fuzzy logic that I purchased about eight years ago. When I first read it, it seemed to me to be not all that interesting, since it didn't seem like it created any particularly new ways of thinking about things. After all, calling someone "tall" is just shorthand, and it's not exactly revolutionary to call someone "somewhat tall" or "48% tall", since we can do that already when we talk about any continuous domain.
I put the book aside and figured that I either didn't get it, I had picked a bad explanation, or it truly was not something I would be interested in.
It occurs to me now that the idea of indeterminate truth should be very interesting to me, since I'm interested in intuitionistic logic, especially where provability differs from truth. The Stanford Encyclopedia of Philosophy indicates that the fuzzy logic I was thinking of is the "broad sense" or "fuzzy control", and that there's a whole other sense of fuzzy logic that is more closely related to my interests.
Posted by Jim Apple at 9/04/2007 07:17:00 PM 0 comments
Labels: constructive logic, fuzzy logic
Are there any Zero-Knowledge Proofs?
I speculated before about zero-knowledge proofs and existentials. What I had in mind was encoding knowledge hiding via types. I suspect this result would be more interesting than I had imagined. Here is my understanding:
Zero-knowledge proofs are interactive proofs, and are therefore in the class IP. This class is the same as PSPACE, which is not yet known to be distinct from P (though it certainly contains P). So, it's possible that P = IP = PSPACE, and ZK proofs can't hide anything the verifier couldn't calculate herself. In other words, we may one day discover that most of our ZK proof protocols are useless. (This is not the whole story, as there are lots of variations on interactive proving.)
So, if there were a correspondence between these ZK proofs and existential types, it would either settle the P = PSPACE problem or discover a problem in type theory that is equivalent to it.
Each of these seems quite unlikely to me.
Posted by Jim Apple at 9/04/2007 07:16:00 PM 1 comments
Labels: zero-knowledge proof
Tuesday, August 28, 2007
Naming Large Integers is Naming Strong Logics
Scott Aaronson's Who Can Name the Bigger Number? was recently discussed on the programming subreddit. One of the comments noted the Big Number Duel, which was inspired by Aaronson's article. The winning entry in the duel was
The smallest number bigger than any finite number named by an expression in the language of set theory with a googol symbols or less.
I think we can do a little bit better without increasing from 10^{100} the number of symbols allowed:
The smallest number bigger than any finite number named by an expression in the language of set theory plus an axiom stating the existence of a proper class of inaccessible cardinals with a googol symbols or less.
Since this theory (ZFC+class of inaccessibles) is stronger than ZFC by itself, I suspect it will define larger integers in the same number of symbols. We could continue along this track by listing larger cardinals and stronger axioms of set theory. This game now becomes essentially Bram Cohen's MineField, in which we are not naming just numbers, but logics.
Besides the Big Number Duel, another test of the "name large integers" game was the C Bignum Bakeoff, which asked contestants to write a C program that returned a large number. Ralph Loader won by shoehorning a logic called the Calculus of Constructions into his entry, then simply saying something like "the largest integer definable in the Calculus of Constructions using less than a googol symbols".
The idea of naming finite numbers by extending the strength of not only the working notation but also the working logic is not new; see 0^{#} and 0^{†}.
Posted by Jim Apple at 8/28/2007 07:49:00 AM 0 comments
Labels: cardinal numbers
Thursday, August 23, 2007
Static Analysis as Grammar Checking
When I explain what I intend to study to friends who don't program, I say something like the text below. I'm still working out the kinks in the analogy, but this is my starting point:
When we write, we sometimes make mistakes:
I am hunry when I wake up.
So we invented spelling checkers to catch this simple mistake. We did the same thing for computer languages a long time ago, to catch simple mistakes like spelling errors. And, just as we then invented grammar checkers to catch sentences like:
I am hungrily when I wake up.
that spelling checkers wouldn't catch, we also invented more sophisticated methods to catch more sophisticated mistakes in computer programs.
But grammar rules can sometimes restrict how expressive our sentences are if we wish to speak informally, or to be funny or poetic, or to quote someone famous:
You've got a friend in Pennsylvania.
What price justice?
Hand me a hypodeemic nerdle.
For computer programs, we try to characterize useful patterns (like "rhetorical question" or "implicit subject in imperative" in English) and build sets of rules called type systems. These sets of rules are designed to allow these patterns without allowing incorrect constructions. Type systems require all incorrect constructions to be corrected by the author before a program is run.
At the same time, we can build more permissive tools that look for particular patterns that are likely to be errors, like repeated words or a lowercase letter following a question mark. Anything flagged by the tool as possibly erroneous can be inspected by the author and corrected if necessary.
The goal of all of this work (called static analysis) is to catch bugs in software before it is shipped and bothers users or blows up aerospace equipment.
My goal is to build new static analysis tools to help prevent rocket explosions and Mac commercials.
Posted by Jim Apple at 8/23/2007 06:42:00 AM 0 comments
Labels: grammar checking, laypeople, static analysis
Wednesday, August 22, 2007
Leibniz Equality, Decomposition, and Definability
In any interesting logic, there are propositions which are true but not provable. The same goes for intuitionistic logic. By the Curry-Howard correspondence, there are types for which neither the types themselves nor their negations are inhabited in the typed lambda calculus. An example of this is Peirce's law: ∀ p, q . ((p -> q) -> p) -> p. We can use Heyting algebra to show that Peirce's law is not provable in intuitionistic logic. According to John C. Mitchell's Foundations for Programming Languages, problem 4.3.16, we could also show that this is not provable because:
. . . if there is a closed lambda term M : ((a -> b) -> a) -> a then there is a closed normal form (the normal form of M) of this type. Show that this implicational formula is not provable by showing there is no closed normal form term of this type.
In Emir Pasalic's PhD thesis (end of section 4.2.2; page 80 in the PDF, 70 in print), he mentions that it is unknown whether the following type is inhabited:
∀ a, b, c, d . Equal (a,b) (c,d) -> Equal a c
where type equality is defined by Leibniz equality
data Equal a b = Equal (∀ f . f a -> f b)
To avoid dealing with datatypes or pairs, this can be expanded to
∀ a, b, c, d .
  (∀ f . f (∀ r . (a -> b -> r) -> r) -> f (∀ s . (c -> d -> s) -> s))
  ->
  (∀ g . g a -> g c)
In GADTless Programming in Haskell 98 (Section 6, top of page 13) Martin Sulzmann and Meng Wang call this the decomposition law.
Is there some Heyting algebra for higher-order polymorphic typed lambda calculi that we can use to show that this type is uninhabited?
Posted by Jim Apple at 8/22/2007 11:12:00 AM 14 comments
Labels: Heyting algebra, Leibniz equality
Tuesday, July 24, 2007
ε_{0} in an Ord instance
In my last post, I worked rather hard to approach ε_{0}, but there is a much simpler way to form a type with an Ord instance that has order type ε_{0}, according to Trees, Ordinals and Termination (mirror):
import List
data Tree = Node | Tree [Tree]
instance Ord Tree where
compare Node Node = EQ
compare Node (Tree _) = LT
compare (Tree _) Node = GT
compare (Tree x) (Tree y) = compare (sort x) (sort y)
instance Eq Tree where
x == y = compare x y == EQ
Or, according to Ordinal Arithmetic with List Structures (mirror)
data Cons = Nil | Cons Cons Cons
instance Ord Cons where
compare Nil Nil = EQ
compare Nil (Cons _ _) = LT
compare (Cons _ _) Nil = GT
compare a@(Cons x' y') b@(Cons x y) =
    case compare x' x of
      LT -> compare y' b
      EQ -> compare y' y
      GT -> compare a y
instance Eq Cons where
x == y = compare x y == EQ
These representations do not help in the general problem of representing all ordinals less than some ordinal κ, but they are much simpler than our previous method.
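To see the second encoding in action, here are a few small ordinals under my reading of the cited paper, where Nil is 0 and Cons x y denotes roughly ω^x + y (the names one, two, and omega are mine):

```haskell
-- Ordinal notation from "Ordinal Arithmetic with List Structures":
-- Nil is 0, and Cons x y is read (roughly) as omega^x + y.
data Cons = Nil | Cons Cons Cons

instance Ord Cons where
  compare Nil Nil = EQ
  compare Nil (Cons _ _) = LT
  compare (Cons _ _) Nil = GT
  compare a@(Cons x' y') b@(Cons x y) =
    case compare x' x of
      LT -> compare y' b
      EQ -> compare y' y
      GT -> compare a y

instance Eq Cons where
  x == y = compare x y == EQ

one, two, omega :: Cons
one   = Cons Nil Nil   -- omega^0 + 0 = 1
two   = Cons Nil one   -- omega^0 + 1 = 2
omega = Cons one Nil   -- omega^1 + 0 = omega
```

As expected, one < two < omega under this instance.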
For trees with larger order types, see What's so special about Kruskal's theorem and the ordinal Γ_{0} (mirror).
Posted by Jim Apple at 7/24/2007 08:51:00 PM 0 comments
Monday, July 23, 2007
Ord, Countable Ordinals, and an Idea of sigfpe
This post is literate Haskell.
In the comments on my last post about ordinals, sigfpe suggests the following type constructors:
type One = ()
type Two = Either One One
type N = [One]
type Nplus1 = Either N One
type NplusN = Either N N
type NtimesN = (N,N)
type NpowN = [N]
type NpowNplusN = Either NpowN N
The idea is that each of these has an Ord instance defined. I think he's suggesting that the Ord instance for each (ignoring ⊥) has an order type that is the same as the ordinal its name corresponds to. I have some quibbles with his implementation, but I really like the idea, and so I'll expand on it below.
I'll start with my disagreements. My first is with the order for NpowN. In NpowN,

[[(),()]] > [[()],[(),()]] > [[()],[()],[(),()]] > . . .

or in simpler form

[2] > [1,2] > [1,1,2] > . . .

Since no ordinal has an infinite decreasing sequence, the order type of [N] is not that of an ordinal. For now, we'll just disregard the [] type constructor.
My second quibble is with Times. sigfpe does not make this explicit, but the inference I drew from his examples was that we could define Times as

type Times = (,)

That doesn't quite work, though. Ordinal arithmetic places the least significant part first, but the Ord instance for (,) places the most significant part first.
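The point about significance order can be checked directly; this sketch (the names mostSigFirst and leastSigFirst are mine) shows the derived pair order and how a Flip-style newtype reverses which component decides:

```haskell
{-# LANGUAGE FlexibleContexts #-}

-- The derived Ord on pairs is lexicographic from the left, so the most
-- significant component comes first:
mostSigFirst :: Bool
mostSigFirst = compare (1 :: Int, 9 :: Int) (2, 0) == LT   -- decided by 1 < 2

-- A Flip-style newtype swaps the type arguments, so the second type
-- argument ends up compared first, matching ordinal notation's
-- least-significant-part-first convention:
newtype Flip f a b = Flip (f b a)

instance Eq (f b a) => Eq (Flip f a b) where
  Flip x == Flip y = x == y

instance Ord (f b a) => Ord (Flip f a b) where
  compare (Flip x) (Flip y) = compare x y

-- Flip (9, 1) represents least significant part 1, most significant 9;
-- it compares greater than Flip (0, 2), decided by 9 > 0:
leastSigFirst :: Bool
leastSigFirst = compare (Flip (9, 1) :: Flip (,) Int Int) (Flip (0, 2)) == GT
```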
Now that we have these differences out of the way, let's get to the code.
We'll need these options for type class machinery later.
> {-# OPTIONS -fglasgow-exts #-}
> {-# OPTIONS -fallow-incoherent-instances #-}
À la sigfpe:
> type One = ()
> type Plus = Either
> type Times = Flip Both
>
> data Both a b = Both a b
> newtype Flip f a b = Flip (f b a)
To define Times
, we use a pair datatype with the standard
Haskell ordering (see instance Ordinal2 Both
below), but
apply the newtype Flip
, which warrants its own
definition, as it will be useful elsewhere.
So far, we can only define finite ordinals. In order to represent
larger ordinals, we want to be able to represent an unlimited number
of applications of a type constructor to a type.
> data ApplyN f x = More (ApplyN f (f x))
>                 | Stop x
ApplyN
does this: after n applications of
More
, the type x
is wrapped n times in the
type constructor f
. Essentially, ApplyN f x
is f^{ω}(x).
Our first example of this will be adding one ω times.
> type N = ApplyN (Flip Plus One) One
This should look like (λ x . x + 1)^{ω} (1).
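As a standalone illustration (a sketch of my own, not literate code from this post), here are the first few inhabitants of N, with a hypothetical level helper that counts applications of More; level needs its type signature because the recursion is polymorphic:

```haskell
type One = ()
type Plus = Either
newtype Flip f a b = Flip (f b a)
data ApplyN f x = More (ApplyN f (f x)) | Stop x
type N = ApplyN (Flip Plus One) One

zero, one, two :: N
zero = Stop ()
one  = More (Stop (Flip (Left ())))
two  = More (More (Stop (Flip (Left (Flip (Left ()))))))

-- level counts the More constructors; the argument type changes at
-- each recursive call, so the signature is required.
level :: ApplyN f x -> Int
level (Stop _) = 0
level (More v) = 1 + level v

main :: IO ()
main = print (map level [zero, one, two])
```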
If ApplyN
works as advertised, N
should have
order type ω under a naive definition of
Ord
. Unfortunately, it does not. First of all, we can't
automatically derive Ord
for the type
ApplyN
, since it uses a type constructor, f
,
to build a new parameter. We will need
> class Ordinal1 f where
> comp1 :: Ordinal a => f a -> f a -> Ordering
to describe the contract f
must meet to make ApplyN
f x
ordered. Note that we use the class Ordinal
here, which is used only to avoid messing with the Eq
requirement on instances for Ord
.
> class Ordinal t where
> comp :: t -> t -> Ordering
The second problem with the naive instance of Ordinal
on
ApplyN
is that each application of More
leads to a new type closer to the type we're really talking
about. What I mean by this is that, for the case of N
, we
really want to talk about
Flip Plus One (Flip Plus One (Flip Plus One …
which we can't directly say in Haskell.
We want to be talking about the type that is the limit. Since we can't
do that, to compare two values, we'll compare them at the maximum of
their levels. For instance, if we want to compare the values Stop
()
and More $ More $ Stop $ Flip $ Left $ Flip $ Left
()
, both of which are of type N
, we'll inject the
value ()
into the higher level. To do this, we also need
f
to have the property that we can inject into it a
value of its type parameter.
> class Container f where
> inject :: x -> f x
To bring this back to the ordinals, we'll be representing
ω^{ω} as either an ordinal less than ω, or an
ordinal less than ω^{2}, or an ordinal less than
ω^{3} or …, and to compare two ordinals, we get a
representation of each in a common level, then compare them using the
comparison for ordinals less than the limiting ordinal of that level.
> instance (Container f, Ordinal1 f) => Ordinal1 (ApplyN f) where
> comp1 (Stop u) (Stop v) = comp u v
> comp1 (Stop u) (More v) = comp (Stop (inject u)) v
> comp1 (More u) (Stop v) = comp u (Stop (inject v))
> comp1 (More u) (More v) = comp1 u v
We've defined the ordering on ApplyN f
so that
comp (Stop x) (More (Stop (inject x))) == EQ
which is what the paragraph above says, but more concise. We've
defined an equivalence relation on values of the type ApplyN f
t
such that x == inject x
, so we're off to a good
start.
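To check the pieces fit, here is a self-contained miniature (my condensation, not the post's code) specialized to N; it verifies that comp (Stop x) (More (Stop (inject x))) is EQ and that the neighboring comparisons come out as expected. The bridging instance Ordinal (f a), which appears later in the post, needs GHC extensions much as the pragmas at the top do.

```haskell
{-# LANGUAGE FlexibleInstances, UndecidableInstances #-}

type One = ()
type Plus = Either
newtype Flip f a b = Flip (f b a)
data ApplyN f x = More (ApplyN f (f x)) | Stop x
type N = ApplyN (Flip Plus One) One

class Ordinal t    where comp   :: t -> t -> Ordering
class Ordinal1 f   where comp1  :: Ordinal a => f a -> f a -> Ordering
class Container f  where inject :: x -> f x

instance Ordinal () where comp _ _ = EQ
-- bridging instance: any Ordinal1 constructor applied to an Ordinal type
instance (Ordinal1 f, Ordinal a) => Ordinal (f a) where comp = comp1

-- x + 1: Left x values sit below the single new top element Right ()
instance Ordinal1 (Flip Plus One) where
  comp1 (Flip (Left x))  (Flip (Left y))  = comp x y
  comp1 (Flip (Left _))  (Flip (Right _)) = LT
  comp1 (Flip (Right _)) (Flip (Left _))  = GT
  comp1 (Flip (Right _)) (Flip (Right _)) = EQ

instance Container (Flip Plus One) where
  inject = Flip . Left

-- compare at the maximum of the two levels, injecting the lower value up
instance (Container f, Ordinal1 f) => Ordinal1 (ApplyN f) where
  comp1 (Stop u) (Stop v) = comp u v
  comp1 (Stop u) (More v) = comp (Stop (inject u)) v
  comp1 (More u) (Stop v) = comp u (Stop (inject v))
  comp1 (More u) (More v) = comp1 u v

zero, zero', one :: N
zero  = Stop ()                       -- 0 at the lowest level
zero' = More (Stop (Flip (Left ())))  -- the same 0, injected one level up
one   = More (Stop (Flip (Right ()))) -- the new top element: 1

main :: IO ()
main = print [comp zero zero', comp zero one, comp one zero]
```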
We'll also want inject
to be monotone, that is
x < y ==> inject x < inject y
and for inject
to send the order type of the domain not
just to some order in the codomain, but to the initial order
in the codomain. So, it's easy to write
> injectPlusL :: a -> Plus a b
> injectPlusL x = Left x
or
> instance Container (Flip Plus b) where
> inject v = Flip (Left v)
Injecting a into a*b is trickier, since we need the second part of the
pair to be the minimal possible value in order for the injection range
to be the initial part of the order. For this we define
> class Least x where
> least :: x
> instance Least One where
> least = ()
> instance Least a => Least (Plus a b) where
> least = Left least
> instance (Least a, Least b) => Least (Times a b) where
> least = Flip (Both least least)
> instance Least b => Least (Flip Plus a b) where
> least = Flip (Left least)
> instance (Least a, Least b) => Least (Flip Times a b) where
> least = Flip (Flip (Both least least))
> instance Least x => Least (ApplyN f x) where
> least = Stop least
And we can now write the injection from a to a*b:
> instance (Least a) => Container (Flip Times a) where
> inject v = Flip (Flip (Both least v))
We also have the trivial
> instance Container (ApplyN f) where
> inject x = Stop x
Let's write the base instances for Ordinal in as general a way as possible:
> class Ordinal2 f where
> comp2 :: (Ordinal a, Ordinal b) => f a b -> f a b -> Ordering
> instance Ordinal2 Plus where
> comp2 (Left _) (Right _) = LT
> comp2 (Right _) (Left _) = GT
> comp2 (Left x) (Left y) = comp x y
> comp2 (Right x) (Right y) = comp x y
> instance Ordinal2 Both where
> comp2 (Both p q) (Both x y) =
> case comp p x of
>       LT -> LT
>       GT -> GT
>       EQ -> comp q y
> instance Ordinal2 f => Ordinal2 (Flip f) where
> comp2 (Flip x) (Flip y) = comp2 x y
>
> instance Ordinal () where
> comp _ _ = EQ
And the conversion instances between the n-ary Ordinal instances:
> instance (Ordinal1 f, Ordinal a) => Ordinal (f a) where
> comp = comp1
> instance (Ordinal2 f, Ordinal a, Ordinal b) => Ordinal (f a b) where
> comp = comp2
> instance (Ordinal2 f, Ordinal a) => Ordinal1 (f a) where
> comp1 = comp2
Now let's do some examples. First we check that Ordinal
is
defined for N
:
> n_ok = comp (undefined :: N) (undefined :: N)
ω^{ω}:
> type NpowN = ApplyN (Flip Times N) One
> npowN_ok = comp (undefined :: NpowN) (undefined :: NpowN)
We can now plug in NpowN
into its own definition to get
ω^{ω} * ω = ω^{ω + 1}
> type NpowNplusOne = ApplyN (Flip Times N) NpowN
> nPowNplusOne_ok = comp (undefined :: NpowNplusOne) (undefined :: NpowNplusOne)
or ω^{ω * ω} = ω^{ω²}:
> type NpowNpow2 = ApplyN (Flip Times NpowN) One
> npowNpow2_ok = comp (undefined :: NpowNpow2) (undefined :: NpowNpow2)
The second method looks more powerful, but continuing it keeps us below ω^{ω^ω}, so let's use the first method of substituting back in to see if we can get further.
ω^{ω + 2}:
> type NpowNplus2 = ApplyN (Flip Times N) NpowNplusOne
> nPowNplus2_ok = comp (undefined :: NpowNplus2) (undefined :: NpowNplus2)
We can now plug ApplyN
back into itself, and extend this
to ω^{ω + ω} = ω^{ω * 2}:
> type NpowNtimes2 = ApplyN (ApplyN (Flip Times N)) One
> npowNtimes2_ok = comp (undefined :: NpowNtimes2) (undefined :: NpowNtimes2)
Similarly, ω^{ω * 3}:
> type NpowNtimes3 = ApplyN (ApplyN (Flip Times N)) NpowNtimes2
> npowNtimes3_ok = comp (undefined :: NpowNtimes3) (undefined :: NpowNtimes3)
ω^{ω * ω} = ω^{ω²}, which we already saw before, can also be written as
> type NpowNpow2' = ApplyN (ApplyN (ApplyN (Flip Times N))) One
> npowNpow2'_ok = comp (undefined :: NpowNpow2') (undefined :: NpowNpow2')
ω^{ω² * ω} = ω^{ω³} =
> type NpowNpow3 = ApplyN (ApplyN (ApplyN (Flip Times N))) NpowNpow2
> npowNpow3_ok = comp (undefined :: NpowNpow3) (undefined :: NpowNpow3)
ω^{ω^ω}:
> type NpowNpowN = ApplyN (ApplyN (ApplyN (ApplyN (Flip Times N)))) One
> npowNpowN_ok = comp (undefined :: NpowNpowN) (undefined :: NpowNpowN)
ω^{ω^ω + 1}:
> type NpowNpowNplusOne = ApplyN (ApplyN (ApplyN (ApplyN (Flip Times N)))) NpowNpowN
> npowNpowNplusOne_ok = comp (undefined :: NpowNpowNplusOne) (undefined :: NpowNpowNplusOne)
We can continue this process up to, but not including,
ε_{0}. We might be able to go further than this, but
I'll save that for later.
Thanks to sigfpe for the idea!
Posted by Jim Apple at 7/23/2007 06:30:00 PM 3 comments Labels: ordinals
Friday, July 20, 2007
More Proof of Referential Transparency
Previously, I talked about proving referential transparency. The authors of A Persistent Union-Find Data Structure (other version) use Coq to prove imperative functions referentially transparent, and they point out the work of Henry Baker, who designed fast persistent arrays (and here), which they also prove to be referentially transparent. These arrays are even available in Haskell, though wrapped in the IO monad.
The authors of the union-find paper say that their structure is not "purely applicative". I don't actually know what that means, but I suppose it means it's not strictly functional, since it requires in-place modification.
In any case, there are now at least four cases for being able to prove referential transparency so we can use it without wrapping everything in the IO monad:
- Memoization: for memoization in (not just for) Haskell, see this paper by the Simons and Conal Elliott. (mirror 1, mirror 2)
- Diff arrays
- Splay trees
- Union-Find
Any other ideas?
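For the memoization item, here is a minimal sketch of the usual pure technique (my example, much simpler than what the cited paper formalizes): results are cached in a lazy structure, so the sharing is never observable from outside.

```haskell
-- fib indexes into an infinite lazy list; within one computation each
-- cell is forced at most once, so callers see a pure function even
-- though work is shared internally.
fib :: Int -> Integer
fib n = memo !! n
  where
    memo = map f [0 ..]
    f 0 = 0
    f 1 = 1
    f k = memo !! (k - 1) + memo !! (k - 2)

main :: IO ()
main = print (fib 30)
```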
Posted by Jim Apple at 7/20/2007 02:55:00 PM 0 comments Labels: Coq, referential transparency
Saturday, June 30, 2007
Constructibility, Uncountability, and ω-Haskell
My last post about the uncountable ordinals ended up on the programming subreddit, where someone who I think is Lennart Augustsson said:
Wow, I didn't realize you could do this. Now I have to wrap my brain around in what sense you can represent uncountable ordinals in Haskell since there are only a countable number of Haskell programs. It must be something like constructive reals...
I think he's exactly right, and I think it has something to do with V=L, models, and how many functions are definable in Haskell. I'm going out of my comfort zone on this, so please correct me if I make any incorrect or nonsensical statements. I'll use as my example the powerset of the natural numbers, which is certainly uncountable, and already definable in Haskell with
data Nat = Z | S Nat
type Powerset t = t -> Bool
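For example, one inhabitant of Powerset Nat, written as a standalone sketch:

```haskell
data Nat = Z | S Nat

type Powerset t = t -> Bool

-- One element of Powerset Nat: the set of even naturals.
evens :: Powerset Nat
evens Z     = True
evens (S n) = not (evens n)

main :: IO ()
main = print (map evens [Z, S Z, S (S Z)])
```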
From the viewpoint of ZFC, there are an uncountable number of ZFC functions from the natural numbers to the booleans. From the viewpoint of Haskell, there are an uncountable number of Haskell functions from the natural numbers to the booleans. But, from the world of ZFC, the number of Haskell functions from the natural numbers to the booleans is countable: we can simply order them by their definitions lexicographically. Since every function definable in Haskell has a finite definition and the cardinality of a countable union of finite sets is countable, the function definitions of type Nat -> Bool are countable.
{f : f ∈ (N -> 2)_{ZFC}} >_{ZFC} N
{f : f ∈ (N -> 2)_{Haskell}} >_{Haskell} N
{f : f ∈ (N -> 2)_{Haskell}} =_{ZFC} Σ_{n ∈ N} (functions from N -> Bool with n-bit definitions) = N
From the viewpoint of Haskell, though, this can't be proven; functions are opaque, so we have no way to order or count them. This might not be true if we use some of the unsafe features of Haskell that would allow us to look at the actual assembly corresponding to a function.
We could, I suppose, imagine a programming language which was not limited to finite-length function definitions. ω-Haskell would not only have types with seemingly uncountable numbers of inhabitants, but each inhabitant would also be definable. Yuk!
Posted by Jim Apple at 6/30/2007 09:02:00 AM 5 comments Labels: constructive logic, countability, ordinals, V=L
Region, Effect and Closure Inference, Part 2
Michael Stone was correct in his comments to part 1. Ben Lippmeier elaborated to me in an email:
In those slides,
(%) is the kind of regions.
(!) is the kind of effects.
($) is the kind of closures.
The symbol represents the kind, as well as the name space of the R/E/C variables.
…
I don't have a paper for this stuff at the moment; it's still under development, but the type system is based on some other work that has been around for a while:
The Type and Effect Discipline
Jean-Pierre Talpin, Pierre Jouvelot
Polymorphic Type, Region and Effect Inference
Jean-Pierre Talpin, Pierre Jouvelot
Polymorphic type inference and assignment
(For the closure typing)
Xavier Leroy
The type system is a merge of the ones described in the above two papers, but with some extra stuff added on to make it practical to use in a real compiler.
Posted by Jim Apple at 6/30/2007 08:47:00 AM 0 comments Labels: effect inference, effects