Conor's Rule?
Edwin Brady writes, in "How to write programs in two easy steps":

I think it's also an area where dependently typed languages like Epigram will shine, because it will one day be possible to write verified interpreters for domain specific languages…

I immediately thought, "I can write dependently typed programs right now with GADTs, and I can do so in GHC Haskell, Scala, or the upcoming C#." This is, of course, only mostly true, since none of these does termination checking, and the syntax is awkward.
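For concreteness, this is the flavor of thing I mean: a minimal sketch of length-indexed vectors with GADTs, written against a recent GHC (the extensions, names, and example are mine, not Edwin's):

{-# LANGUAGE GADTs, DataKinds, KindSignatures #-}

-- A sketch of the "dependently typed" programming GADTs buy you in GHC
-- Haskell: vectors indexed by their length, so taking the head of an
-- empty vector is a type error instead of a runtime crash.

data Nat = Z | S Nat

data Vec (n :: Nat) a where
  VNil  :: Vec 'Z a
  VCons :: a -> Vec n a -> Vec ('S n) a

-- The type rules out the empty case, so no runtime check is needed.
vhead :: Vec ('S n) a -> a
vhead (VCons x _) = x

-- Mapping preserves the length index.
vmap :: (a -> b) -> Vec n a -> Vec n b
vmap _ VNil         = VNil
vmap f (VCons x xs) = VCons (f x) (vmap f xs)

main :: IO ()
main = print (vhead (VCons (1 :: Int) (VCons 2 VNil)))

Here vhead VNil simply doesn't typecheck, which is most of what I want from "dependent types" day to day.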
The same blog post later references Greenspun's Tenth Rule:
Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp.

I thought that my programs follow a similar pattern with dependent types: I'm always simulating them informally. Based on Section 6.1, "The Trouble with Faking It", in Conor McBride's Faking It (Simulating Dependent Types in Haskell) (mirror), I wonder if there's any truth to something we might call
Conor's Rule

Any sufficiently complicated Haskell or ML program contains an ad hoc, informally-specified, bug-ridden, half-completed simulation of dependent types.
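To make the "simulation" concrete, here's roughly what the faked version looks like when you don't (or can't) use GADTs, in the phantom-type style that Faking It discusses; the details here are my own sketch, not McBride's code:

{-# LANGUAGE EmptyDataDecls #-}

-- Type-level naturals as empty phantom types. This is the "ad hoc,
-- informally-specified" part: nothing stops a nonsense index like S Bool.
data Z
data S n

-- A plain list wearing a phantom length index.
newtype Vec n a = Vec [a]

nil :: Vec Z a
nil = Vec []

cons :: a -> Vec n a -> Vec (S n) a
cons x (Vec xs) = Vec (x : xs)

-- The index says this can't be empty, but the implementation still has to
-- handle a case the simulation can't actually rule out.
vhead :: Vec (S n) a -> a
vhead (Vec (x:_)) = x
vhead (Vec [])    = error "impossible (we hope)"

That error case is exactly the "bug-ridden, half-completed" bit: the length index is only a convention, and the compiler won't tell you when you've broken it.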
1 comment:
Ever use Coq? I really like it as a total functional programming platform with a serious dependent type system.
I haven't really gotten good enough at Epigram to have an opinion.