## The use of closed nonterminals

Motivated by a modular well-definedness analysis for higher-order attribute grammars with forwarding that I developed (for which I just noticed I really need to get a pdf up), I added the concept of “closed nonterminals” to Silver this last year.

Nonterminals in Silver allow extensions to introduce new productions, but only so long as those productions forward back down to the host language.  Extensions are then free to introduce new synthesized attributes, confident that they can always be evaluated via forwarding.  It’s a nice way of solving a very, very strong form of the expression problem.

Closed nonterminals are introduced because we wanted to sometimes make the dual trade-off.  Instead, extensions are allowed to introduce new attributes, but only so long as those attributes have a default value.  Then, new (not necessarily forwarding) productions can be introduced by extensions, confident that if they lack an explicit equation, there will be a default value instead.
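To make the trade-off concrete, here’s a rough Haskell analogy of my own (Def and pp are purely illustrative names; this is not how Silver works internally): a closed nonterminal acts like a type class whose methods all have defaults, so anyone can add instances (productions) freely, while any new method (attribute) must ship with a default.

```
-- a rough analogy: a "closed nonterminal" as a class whose methods
-- all have defaults, so new "productions" (instances) cost nothing
class Def a where
  pp :: a -> String
  pp _ = "<def>"   -- an extension's new "attribute" must supply a default

-- an extension introduces a brand-new kind of definition:
data TypeDef = TypeDef String

instance Def TypeDef   -- no equations required; the default applies
```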

In some sense, this switches from a “datatype-like” form of extension to an “object-like” form of extension.  We were motivated to introduce this to handle transformations from concrete syntax tree to abstract syntax trees.  Without closed nonterminals, you’d have a really stupid time trying to write what your new concrete syntax productions have to forward to.  When they’re closed, you don’t have to bother with that, and instead you just can’t add any new truly meaningful attributes to the concrete syntax tree… which is fine because we don’t want to.

At the time of introduction, I also noticed one other specific usage for closed nonterminals… as part of an environment pattern.  We’d write something like:

```
nonterminal Defs with ...
closed nonterminal Def with ...
```

This allows extensions to introduce entirely new kinds of definitions into the environment.

Since then, I’ve started to feel the urge to use them a little more often, nearly always in the same case of handling lists of things: Defs with Def, top-level Dcls with Dcl, etc.  However, I’ve resisted so far, because I wasn’t sure why this would be okay.  Perhaps down the road, we’d suddenly discover this was the wrong decision?  I’d want an explanation for why this is right, and not just convenient now, before I’d start to allow it to be used.  There was a good story for concrete syntax, and for the environment pattern, but not so far for much else.

Simply choosing to make all “list element” nonterminals closed all the time is definitely wrong: the fact that Stmts is a list, for example, very, very much does not mean Stmt can be a closed nonterminal.  That would eliminate the ability of an extension to introduce any sort of new analysis.

But, I think I’ve come up with an explanation: any list that exists to only permit an unordered collection of syntax can have its “list element” be a closed nonterminal.  A  good example is modifiers: “public static final,” “final public static,” who cares? The point is simply to recognize what’s there.

I’m still not completely convinced this is sufficient justification for using a closed nonterminal, but it appears to be a necessary condition for some particular class of valid uses.  Modifiers seems very safe, but the idea of making top-level declarations (which meet this criterion, at least for Silver itself) closed still makes me nervous…  The trouble is, we’re running into some extensions we want to write that demand it be closed, but I don’t yet know what sort of interesting extensions would be precluded by doing so.  Which, of course, isn’t evidence that there aren’t any. :)

## Extending C

I’m looking for a few good language extensions to C.

I want to demonstrate the practicality of composable language extensions, and to do that, I’m going to extend C with a bunch of neato stuff.  But I haven’t yet determined just what neato stuff that will be.

I have a number of good ideas motivated by C libraries.  For example:

• MPI. Parallelism is always good, and there should be quite a lot of lowish-hanging fruit for a language extension for MPI.
• Bison. Parsing is really well-addressed by DSLs, so this is an obvious choice.
• Lua. There’s a lot of regularity to the interface between these languages that could be very well addressed by a language extension.  This isn’t extending C with Lua, but rather an extension to C that makes using the Lua C API dramatically easier.
• OpenCL. I haven’t fully investigated OpenCL yet, but I very much imagine this will be as amenable to a language extension as MPI is. And GPU programming is another obvious hot topic.

However, all of these involve potentially complicated design work.  In the interest of ~~laziness~~ time-efficiency (and, as a bonus, to speak more directly to the research community), I’d instead like to implement language extensions others have already pioneered, but implement them as composable language extensions.  This way, the focus is on the advantages of this approach, and the particular extensions are already regarded as interesting.  Of course, this leaves me adrift: there are a lot of papers that extend C.  And I haven’t really paid attention to this area in the past, so I’m somewhat clueless as to which are actually regarded as good.

So, dear #lazyweb, what are your favorite papers that extend C?

## An algebra of language extension

I recently discovered (evidently I’m slow in keeping up with recent conferences) a nice paper from LDTA ’12 called “Language composition untangled” [pdf]. I’m going to pull two concepts out of that paper, and talk about them a little bit more.

They (Erdweg, Giarrusso, Rendel) introduce two operators for talking about language composition.  (Well, more really, but I’m just interested in these two.)  The first they called “language extension” and they write $H \triangleleft E$ to represent the language formed by composing a host language H with an extension E that was written specifically for H.  There are two algebraic properties to note: it’s not commutative and it’s not associative.

The second operator they call “language unification” (I really don’t like this term, but oh well!) and they write $L_1 \uplus_g L_2$ to represent the language $L_1$ composed with a separate language $L_2$ using the “glue code” g.  There are two algebraic properties to note: it’s not associative, but it is commutative.

So far we have very little theory here.  We can at best say that $L_1 \uplus_g L_2 = L_2 \uplus_g L_1$. Not very exciting.  So let’s take a detour and ask ourselves “well, what do we want to say?”

Well, let’s start by relating the two operators.  Can we say they’re the same in some fashion? Well, that depends.  We can write something like $L_1 \uplus_g L_2 = L_1 \triangleleft (L_2 \cup g)$.  In essence this says we can turn a language unification into an extension by merging in the glue code with one or the other languages, which seems logical.  However, this introduces a new operator: that informal little $\cup$ that talks about merging in the glue code to a language.  So to admit this kind of equation, we need to force a property onto our language composition framework.  We need to be able to manipulate the specifications in this way, and consequently the specifications can only say things that CAN be manipulated in this way.  Okay, what else might we want?

Suppose we have two language extensions: $H \triangleleft E_1$ and $H \triangleleft E_2$.  We might want some ability to obtain a language with both extensions at the same time.  For example, $(H \triangleleft E_1) \uplus_g (H \triangleleft E_2)$.  That’s nice. We can write that.  But it does leave us with a little problem: we already had the two extended languages that worked just fine, but now all of a sudden to get them both to work together we need this pesky g glue code to fix up the composition.  We can’t get a language with both extensions without it.  Well, let’s assert something about our language composition framework again: if we have $H \triangleleft E_1$ and $H \triangleleft E_2$, let’s say we can obtain $H \triangleleft (E_1 \uplus_{\emptyset} E_2)$.  That is, for two extensions to a common host, we do “language unification” without any glue code ($g = \emptyset$).

There’s a nice property that $\uplus_{\emptyset}$ has, too: it’s associative.  In fact, it seems that it is also associative with any “language unification” operation: $(X \uplus_g Y) \uplus_{\emptyset} Z = X \uplus_g (Y \uplus_{\emptyset} Z)$ seems a little bit weird, but I think that’s because $\uplus_{\emptyset}$ is just plain weird in the first place.  It kinda only makes sense if you think about it in terms of its operands having some common host language, somewhere deep down.  Certainly, that’s the only way we’ve said you can obtain it.  Anyway, we also have a nice property that $L \uplus_{\emptyset} L = L$, which doesn’t necessarily hold if $g \neq \emptyset$.

Okay, what can we do with these properties?  Well, for one, we can compose any number of extensions to H: $H \triangleleft (E_1 \uplus_{\emptyset} E_2 \uplus_{\emptyset} E_3)$ is legit.  That’s neato.

Let’s look a little bit at $(H \triangleleft E_1) \uplus_{\emptyset} (H \triangleleft E_2)$.  We can rewrite this as $(H \uplus_g (E_1 \setminus g)) \uplus_{\emptyset} (H \uplus_h (E_2 \setminus h))$. Taking advantage of the associativity of $\uplus_{\emptyset}$ and commutativity of $\uplus_x$ in general, we can produce $((H \uplus_{\emptyset} H) \uplus_g (E_1 \setminus g)) \uplus_h (E_2 \setminus h)$, which we can rewrite back to $(H \triangleleft E_1) \triangleleft E_2$.  And we can flip it the other way around, too.  Cool!

Let’s try something different.  Starting again with $(H \uplus_g (E_1 \setminus g)) \uplus_{\emptyset} (H \uplus_h (E_2 \setminus h))$, we can do the rewrites in a different direction and get $H \uplus_g (H \uplus_h ((E_1 \setminus g) \uplus_{\emptyset} (E_2 \setminus h)))$.  Let’s take some small liberties to write that as $H \uplus_{g \cup h} ((E_1 \uplus_{\emptyset} E_2) \setminus (g \cup h)) = H \triangleleft (E_1 \uplus_{\emptyset} E_2)$.  Awesome!  These things that seem to be the same, are!  Well… with a few liberties about how we treat that glue code, that is.

So it would seem that the capability to obtain $H \triangleleft (E_1 \uplus_{\emptyset} E_2)$ lets us “lift” an extension to “depend” on another extension if we so choose.

Let’s get crazier.  Suppose we have $H \triangleleft E_1$, $H \triangleleft E_2$, $(H \triangleleft E_1) \triangleleft E_3$, and $(H \triangleleft E_2) \triangleleft E_4$.  Can we compose them all?  Well, we can certainly produce $H \triangleleft (E_1 \uplus_{\emptyset} E_2)$. And from there we can use the above to obtain $(H \triangleleft E_1) \triangleleft E_2$, and of course with that we can obtain $(H \triangleleft E_1) \triangleleft (E_2 \uplus_{\emptyset} E_3)$.  Reversing direction, we can get right back to $H \triangleleft (E_1 \uplus_{\emptyset} E_2 \uplus_{\emptyset} E_3)$, and from there it’s obvious we’ll be able to manipulate in $E_4$ as well.

Alright, so what’s the point of all this?  Well, I found it interesting to try to think about what sort of properties we need to be able to do the kind of complicated composition in the last paragraph.  I’m not sure I’ve formulated things well enough yet, but so far we needed:

• The ability to produce $H \triangleleft (E_1 \uplus_{\emptyset} E_2)$ from $H \triangleleft E_1$ and $H \triangleleft E_2$.
• The equalities $H \triangleleft (E_1 \uplus_{\emptyset} E_2) = (H \triangleleft E_1) \uplus_{\emptyset} (H \triangleleft E_2) = (H \triangleleft E_1) \triangleleft E_2 = (H \triangleleft E_2) \triangleleft E_1$, which, although they never even mention glue code, seem to boil down to some ability to manipulate it like a set.

Our little attribute grammar playground, Silver, gets the second property easily, since for attribute grammars you pretty much can define $X \triangleleft Y = X \cup Y$ and $X \uplus_g Y = X \cup Y \cup g$. Although this does mean extensions (and glue code) can’t go do anything like delete from a host language. Or overwrite equations from the host language. (Well, we never had that in the first place! Hooray!)
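To spell that out (my own quick check, using only idempotence, commutativity, and associativity of $\cup$): under those definitions, $(H \triangleleft E_1) \uplus_{\emptyset} (H \triangleleft E_2) = (H \cup E_1) \cup (H \cup E_2) = H \cup (E_1 \cup E_2) = H \triangleleft (E_1 \uplus_{\emptyset} E_2)$, and likewise $(H \triangleleft E_1) \triangleleft E_2 = (H \cup E_1) \cup E_2 = H \cup (E_1 \cup E_2)$, so all four expressions in the second bullet really do denote the same specification.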

The tricky part is the first property.  That composition has to, you know, work.  And, like, not just be a broken compiler.  Getting that first property will be my thesis.  Whheeee!

## Violating encapsulation — can it work out?

If you ask ten people to define object-oriented programming, you’ll probably get ten different answers.  While it’s probably not one of the defining features of OO (my answer: dynamic dispatch, object identity, and yes, mutable state), certainly one of the central concepts is encapsulation.

It’s pretty easy to see why encapsulation is useful generally (although it can become muddled when people start piling on getters and setters.) So one of the major puzzlers that an OO programmer might experience when trying to actually use a functional language (like Haskell, OCaml, and F#) is that you don’t see encapsulation at first. In fact, algebraic data types seem to deliberately and completely violate encapsulation!

And… it all seems to work out? How should the OO programmer make sense of this, when it’s often deemed extremely important in OO land?

## Meaningful types

One possibility is simply that programmers are encouraged to give types stronger meanings.  Ask programmers how to represent money. A decent programmer is going to say “an integer.”  A poor programmer (for most domains) will say “a float” or “a string!”  A Haskell programmer might reply “Cents.”

`newtype Cents = Cents Int`

Since the immediately googlable explanations of newtype are terrible, I’ll give one of my own here.  Typically, any given type from a library carries awfully little semantic content.  An ‘Int’? Of what? In what units? The type doesn’t say.  An ‘[Int]’? What does that mean, besides a list of integers?  The more you have to rely on context to get meaning, the more likely a programmer is to make mistakes from missed or forgotten contextual information.

Newtype lets you create a type that you can then decree to have some semantic content: write ‘newtype Cents = Cents Int’ and then document the meaning of the type as “cents in US currency.”  This new type then carries its meaning explicitly, instead of implicitly as context the programmer must remember.

And it’s not just a type alias. You’re forced to do a little song and dance to make the type checker happy, reminding yourself of this meaning everywhere you use it by wrapping/unwrapping (zero runtime cost, but you’re not supposed to care yet) what you’re working with.  For example, spot the errors in the following context-less piece of code:

```
addDollars :: Cents -> Int -> Cents
addDollars (Cents cents) dollars = Cents (cents + dollars)
```

But consider if it had been

```
addDollars :: Cents -> Int -> Cents
addDollars (Cents asdf) qwerty = Cents (asdf + qwerty)
```

Choosing descriptive variable names is wise. But choosing descriptive types may be more important. All we’re missing is a Dollars type, and the awful variable names here wouldn’t have even mattered.
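For the record, here’s roughly what the fix might look like (a sketch of mine, not code from any real library; the 100 is the assumed dollars-to-cents conversion):

```
newtype Dollars = Dollars Int

addDollars :: Cents -> Dollars -> Cents
addDollars (Cents c) (Dollars d) = Cents (c + 100 * d)
-- unwrapping Dollars is exactly the reminder that a conversion is due
```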

The benefits of this start to compound significantly once you’re working with data that’s aggregated together somehow.  Ponder the difference between these two functions:

```
newRect :: Int -> Int -> Int -> Int -> Rectangle
newRect :: Point2D -> Width -> Height -> Rectangle
```

It’s compiler-checked documentation. It immediately raises a possible code smell any time you see ‘newRect (Point2D x y) (Width h) (Height w)’ in your code. It’s damn invaluable as a refactoring tool. And so on.

## No null, yes garbage collection, no mutation

I think, in this century, most people can appreciate the advantages of garbage collection. Without it, ANY sort of interesting data structure must somehow suffer from managing state: whether any given pointer is currently valid, invalid, or null.  “Who” is responsible for freeing a pointer, once allocated, is a question that implicitly propagates through the program.  And so forth.  Obviously, not having GC increases the frailty of the data representation, and it’s then extremely helpful to encapsulate the data to help manage these invariants. (Of course, then there are things like the various smart pointers in C++ that can help deal with these small concerns.)

But, even in a language like Java, any reference may still be null, and this still presents many of the same state-managing problems to a data structure that might not need it otherwise.  After all, if full implementation details are exposed, then the user is perfectly capable of constructing a data structure with nulls in inappropriate places. And frequently, there’s no indication of what’s an appropriate or inappropriate place, except that some will raise null pointer exceptions sometimes and others will be handled just fine.

Or worse, given that OO languages typically have unrestricted mutation, they might modify the content of a data structure in any way — and this part is important — invisibly.  Call a function, pass it a reference to a data structure. Will that data structure get modified?

In a functional setting, of course not!  Even in the ML family, which does not require stateful functions to involve monads or anything of the sort, you know that a value of a type without any ‘ref’s in it won’t change.  In Java it’s hopeless, but in C++ you might get away with ‘const.’  If your users don’t hate you for using const, because there’s so much const-incorrect code out there they have to work with.  Oh well, they can just cast it away, hahahaha… sigh.

The end result of all this is that the values of a type in a functional language are actually just values, and not state representations.  Yes, it’s still quite possible to construct nonsense.  But, it’s a lot more obvious when and where values are getting constructed.  There are stronger invariants on what a value might look like (we only have to worry about the values that a value is composed of, not null references, not pointer ownership). And with newtype, we can be sure those values have an obvious meaning to the programmer who is using or constructing them.  To construct nonsense, you pretty much have to know you’re constructing nonsense.
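To put a point on it (a sketch; Tree and lookupMin are made-up names for illustration): possible absence has to be declared in the type before it can occur at all, so there’s no ambient null to manage.

```
data Tree = Leaf | Node Tree Int Tree   -- no field here can be null

lookupMin :: Tree -> Maybe Int          -- absence is declared, not ambient
lookupMin Leaf            = Nothing
lookupMin (Node Leaf x _) = Just x
lookupMin (Node l _ _)    = lookupMin l
```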

## Representation

Another somewhat interesting aspect of why this works is that (duh?) the features to make it work are also present.  Any time you have a data structure where a value may be one of several different constructions, an OO language forces you to make use of subtyping (each construction is a child class.)  And your base type typically has few internal details; instead there’s simply a set of functions the subtypes should appropriately implement, and you don’t know which subtype it really is, so you don’t have access to those details.  You’re almost forced into encapsulation.

Algebraic data types, on the other hand, fix the list of constructors, so the details are right there. And pattern matching actually makes using those details quite pleasant.  I’d wager that nearly all situations where we see a class with a lot of getters/setters are examples of a situation just screaming out for data types instead of objects.
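For instance (a sketch; Shape is a made-up example, not from any codebase in particular):

```
data Shape
  = Circle Double        -- radius
  | Rect Double Double   -- width, height

-- the details are right there, and pattern matching uses them
-- directly, with no getters in sight
area :: Shape -> Double
area (Circle r) = pi * r * r
area (Rect w h) = w * h
```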

## Finally, we do encapsulate!

Because it is actually a good idea!  The major difference seems to be that OO style forces you to encapsulate immediately, which isn’t always the best thing.  All manner of crap has been invented in the OO world to violate encapsulation in limited ways.  Java’s “default” protection modifier, unfortunate things like ‘friend’ classes, sometimes reflection, and so forth.  The public method or class with documentation that just says “internal API – do not use” is hardly unheard of!

The unit of encapsulation isn’t always one type! So these things become necessary.

In a functional setting, you work with internal details exposed… until you hit a boundary (often a module) where you decide it’s time to hide it.  Then you have a type class, or a module signature (depending on whether you’re in a Haskell-like or ML-like language. Have you ever been in a language?) that defines an interface, much as you would see in an OO interface.

Then, your module hides the implementation details. Users of the module don’t see what they shouldn’t see. Tadaa!
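In Haskell, for example, the module export list is where that decision gets made (a sketch; Stack is just a placeholder):

```
module Stack (Stack, empty, push, pop) where
-- the type Stack is exported, but its constructor is not,
-- so users of the module never see the representation

newtype Stack a = Stack [a]

empty :: Stack a
empty = Stack []

push :: a -> Stack a -> Stack a
push x (Stack xs) = Stack (x : xs)

pop :: Stack a -> Maybe (a, Stack a)
pop (Stack [])       = Nothing
pop (Stack (x : xs)) = Just (x, Stack xs)
```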

## The bottom line

A common theme celebrated by functional programmers is that their programs “just work” after the type checker is satisfied.  I think the biggest part of this is controlled state, and many would agree.

But I think the second biggest part of it is this exposed representation — internally to a module, of course.  One of the problems with encapsulation and abstraction is that we nearly always end up making abstractions leaky.  Real programs are messy, there are unfortunate corner cases, programmers make mistakes, good design is hard, and so on and so forth.

When you’re working in a module with exposed representation, you’re basically programming at the “metal” of what you’re really doing.  (Pause a moment to let the C programmers in the room stop laughing at my use of that metaphor for writing Haskell, please.)  There aren’t any leaky abstractions, except perhaps those from other modules you’re using.  What it is, is what it is.  It’s not bytes and pointers, and you may not know exactly what the generated assembly is, but you can have an equally good (or better!) idea of what’s going to happen when you run it (if you’re willing to temporarily ignore a couple of important operational details, like how much memory it will consume.  Correctness first, optimization second!)

As a result, not only are you less likely to forget some detail and mistakenly trust a leaky abstraction (because what you’re actually doing is right there in front of you), but the language has been designed around helping you remind yourself of all those details.

So that’s 1400 words I hope is interesting to someone.

I’ve been thinking about all this recently because I’m working on extensible languages, specifically the meta-programming language Silver. (We’re also in the very early stages of building a google code project site for Silver. That link probably isn’t useful now, but probably by 2012…) One of the potential pitfalls is that, to really allow a compiler to be extensible, a lot of the implementation details need to be exposed so an extension can… well, do its thing with them.  So a major question we have to be able to answer is: why isn’t this just a completely horrible spaghetti code scenario?

My hope is that we can get enough compiler- and language-enforced properties that:

• We can completely and totally rule out MANY serious ways in which an extension can misbehave. Much like functional languages do with purity and data types, but even stronger properties are needed.
• We can be confident that, for the few ways that are impossible to rule out, we get the same kind of “experience” that functional programmers do living dangerously with internal implementation details.  That is, making sure those details are brought to the programmer’s attention when coming anywhere close to them.
• We can automatically gather ALL relevant information about a point of possible misbehavior, not missing any important details, so an IDE, for example, can display it for the programmer. Or just so the programmer has some straightforward idea of where to look for these things themselves. Or at the very least so we have some hope of categorizing them so they can be tackled by a few standard coding conventions.
• We can still do encapsulation when appropriate.

Of course, the real goal is to make language extension seamless and problem free for the user of the extended language. If the user can’t be sure that trying to use extensions X and Y together won’t make the compiler explode, it’s probably game over for trying to convince anyone to use extensions at all.

But much like laziness was a kind of “forcing function” for purity in Haskell, I suspect making sure things can’t devolve into spaghetti code will be a valuable “forcing function” for ensuring language extensions will “just work” for the end users.

Okay, now that’s 1900 words I hope is interesting to someone.

## Project hosting with test framework integration?

So I’ve been looking around at open source project hosting websites, and there’s one thing I think is really, desperately missing from nearly all of them (github, bitbucket, and google code. All look equally good to me.)

Test framework integration.

Now, I realize they can’t offer to run the tests for me (too limited/abusable), but why not just write down a format, and let me upload test results with an API? That’d still provide:

• A nice web interface to test result information. (“4 failures” *click* “oh, those.”)
• History of failures / fixes; differences between branches.
• One click link to the test code that failed.
• Maybe some integration with issue tracking (by noting in the test metadata what bugs they’re related to)
• Even better: provide not just for true/false success/failure, but also numerical data, like performance, so benchmarks can be tracked over time.
• All kinds of other wonderful things I’m too lame to think of.

Of course, there’s another problem: I can’t even find any stand-alone tools that let me do this.  There’s all kinds of tools out there for bug/task management; where are the test result database tools?

Isn’t this a dead-obvious thing every project should have these days, like version control and issue tracking? What am I missing?

(I know TAP exists, but it doesn’t really provide any of the necessary metadata in its protocol, and none of the TAP consumers seem to provide any of the above features, that I saw.)

Update: It appears the keywords I was missing were “continuous integration.” I’ve now found TeamCity, Hudson/Jenkins, etc. So shift the question: which of these do you prefer, and which one accepts some reasonable format of test output that I can give it?  My tests are in our own test framework by necessity since they’re all in our own language…

Although, the above still stands: those project hosting websites should offer an API that a continuous integration server can talk to.

## Two extremely interesting LtU discussions

What needs to be done? Some discussion on programming languages research in general. I like it because some of the things people brought up seemed totally out of left field to me, but still quite interesting. (For example, the education aspect of programming languages…)

The GNU extension language. Some interesting discussion on language extension, in general.  I need to go through this again more carefully, because I’m always on the lookout for good pithy rebuttals to the “why not macros?” question besides my current standard “because we want to write in languages other than Lisp/Scheme/Racket,” and there seemed to be a fair amount of skepticism of macros as a solution there.

## A design variation on “Perlis languages”

There has been some recent discussion on languages people should learn if they want to broaden their understanding of programming generally.  For example, Fogus, and Eugene Wallingford in response.  There has also been a recent burst of posts to proggit on API design (some of which I find questionable, by the way.)

I’d like to attempt a merger of these two ideas: what should people learn to improve their ability to design programs and APIs, beyond simply looking at good or bad examples, or learning specific languages?  Instead, I think there are some interesting concepts people should be familiar with:

Objects. I’ll start with the classic that I assume everyone has already learned something of, but here is a small list of things you should know that many do not:

• When inheritance is actually called for, and when you should avoid it.
• “Generics” and subtyping, in particular: covariance, contravariance, invariance. (Types are important here, so I suggest using a statically typed OO language for this, also avoiding C++ because templates are untyped. If this is all perplexing to you, I’ve heard good things about the book “Effective Java” but I haven’t read it myself.)
• (There is much more to list about doing OO well, but part of this whole exercise is to give people the tools to come up with that list themselves, so I will stop here for now.)

Datatypes. Here I refer to the usual data type declarations of functional languages. Some examples include Haskell (data), ML (datatype), Scala (case class), F#/OCaml (type), and of course many others.  Datatypes and objects are actually dual to each other:

|           | constructors | functions                 | key operation    |
|-----------|--------------|---------------------------|------------------|
| objects   | unlimited    | fixed (virtual functions) | dynamic dispatch |
| datatypes | fixed        | unlimited                 | pattern matching |

Things to understand:

• How the datatype world gets by without ‘null’.
• How datatypes and objects can be transformed into each other. (huge hint: datatypes are to existential types as objects are to the visitor pattern.)
• How damn awesome pattern matching is. (For example, “multiple dispatch problem? Pffft!”, nested patterns, etc.)
• Can you think of any object hierarchies in code you’ve written that would have been vastly better represented as datatypes, instead? (Obvious choice: anywhere you’ve used the visitor pattern. Next choice: anywhere you would have used the visitor pattern, but all that extra syntactic noise made it not worthwhile. But beyond that?)
• Further reading: On understanding data abstraction, revisited.

“Complete types.” I’ve completely made up this term, so bear with me for a moment. There are a number of types where it doesn’t matter whether we choose to represent them as objects or datatypes because both the list of constructors and the list of necessary functions are fixed. That is, all other functions can be written in terms of the fixed list. (To really be “complete” it does have to be ALL functions, not just those any sane person would want to write!)

For example, lists. You need ‘cons’ and ‘nil’ constructors, and you need the ‘fold’ operation. (Did you expect me to list ‘null’ ‘head’ and ‘tail’ as the operations on a list? Why might those be a very poor interface to lists? Beyond just, “all of them can be written using fold,” I mean.)  You don’t need anything else. Any function over a list can be written as a fold, and any way of constructing a list (e.g. append) can be written as a function that just uses the two constructors (and probably fold, too.)
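In Haskell, that complete interface is small enough to write out in full (a sketch; foldList is just the usual right fold, spelled out to keep it self-contained):

```
-- the complete interface: the two constructors, plus fold
foldList :: (a -> b -> b) -> b -> [a] -> b
foldList _ z []       = z
foldList f z (x : xs) = f x (foldList f z xs)

-- everything else is derivable from that interface alone
append :: [a] -> [a] -> [a]
append xs ys = foldList (:) ys xs

isEmpty :: [a] -> Bool
isEmpty = foldList (\_ _ -> False) True
```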

Other examples include Boolean (though you need lazy evaluation, or a simulation of it, to write the ‘if’ function in its interface), Pairs, and Maybe (forget ‘isJust’ and ‘fromJust’, what is the function in its interface?).

• Come up with another example of a “complete type.”
• Compare the implementation of Maybe as both an object and a datatype.

(okay, I’m sliding into “give homework” mode here. Sorry! I love these because they let us really directly apples-to-apples compare datatypes and objects, albeit for very simple cases.)
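Since I’m handing out homework anyway, here’s roughly the shape an answer to that last comparison could take (a sketch of mine, and yes, it spoils a little; the D/O suffixes just avoid clashing with the Prelude):

```
{-# LANGUAGE RankNTypes #-}

-- Maybe as a datatype: fixed constructors, pattern match to consume
data MaybeD a = NothingD | JustD a

maybeD :: b -> (a -> b) -> MaybeD a -> b   -- its one interface function
maybeD def _ NothingD  = def
maybeD _   f (JustD x) = f x

-- Maybe as an "object": a value simply *is* its interface function
newtype MaybeO a = MaybeO { runMaybe :: forall b. b -> (a -> b) -> b }

nothingO :: MaybeO a
nothingO = MaybeO (\def _ -> def)

justO :: a -> MaybeO a
justO x = MaybeO (\_ f -> f x)
```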

Typeclasses. Haskell is the canonical example, but Scala may also approximate typeclasses. Type classes actually bring us back to the object “side of the fence” from datatypes, but in a slightly different way: they’re most comparable to interfaces from OO languages.

But there is a very key distinction: they’re declared separately from the “object” (I use the word object here simply to keep with the analogy to interfaces.)  So the number of type classes an “object” belongs to isn’t fixed, as it is for interfaces in typical OO languages. (Using the word “typical” here because it’s technically possible but ridiculously unsafe in e.g. javascript and ruby, and safely possible in “aspect-oriented” languages.)

Further, “constructor classes” allow us to make more than just types members of a type class, by allowing type constructors as members too.  This is the (often unnoticed) mechanism behind type classes like Functor and Monad in Haskell.
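Concretely (primed names just to avoid the Prelude’s own Functor): the class parameter below is applied to a type argument, so it has to be a type constructor like Maybe, not a plain type.

```
-- a constructor class: f ranges over type constructors, not types
class Functor' f where
  fmap' :: (a -> b) -> f a -> f b

instance Functor' Maybe where
  fmap' _ Nothing  = Nothing
  fmap' g (Just x) = Just (g x)
```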

Things to see:

• Get a feel for how powerful the two-level system of “datatypes with typeclasses on top” is.  If you came to the conclusion earlier that datatypes would make for a superior “default” way of representing things, typeclasses should start to make you wonder if objects are even needed at all!
• But… come up with a case where “typeclasses over datatypes” is insufficient/cumbersome and objects lead to a superior design.  Bonus points: do so in a way that doesn’t involve encapsulating mutation.
• Go figure out Functor, Monad, and such. The Typeclassopedia may help.

Modules. Here, you’re pretty much limited to OCaml (standard ML, too, but a lot fewer resources are out there for learning it…)

Relations. This is an important, and often overlooked, area of program design, despite the fact that it’s probably one of the most common types of programs today.  (Perhaps because there is no “relation-oriented programming language.” The closest is probably Prolog, and that’s not what I have in mind at all.)

• Take a look at relational algebra, and of course SQL. (Both, because they do take different perspectives!)
• If you haven’t looked at an ORM before, try to learn one.
• If you have, you’re probably a bit “tainted” in that you view relations through the lens of what your ORM makes easy.  Try to design a perfectly reasonable database that your favorite ORM sucks at actually using.  Bonus: go beyond obvious things like “my orm expects there to always be a single integer primary key”. Double bonus: do so without googling “why my favorite orm sucks” ;)
• Consider how you would adapt relations to datatypes instead of objects (and how to adapt datatypes to relations.)  (Perhaps look at Haskell DB libraries? I haven’t, so I’m not sure what’s there…)
• If you’ve worked on an app with a DB recently, try to think about what your “interface” to the database actually is. (I mean, to start you can look at “all queries we run” but odds are that’s not a very good interface!) (There is a “cross cutting” issue that makes this somewhat difficult: joins operate over many tables at once, so we can’t just do the obvious “give a simple interface to each table” (something ORMs frequently attempt.) But, we do actually have a similar situation with pattern matching on datatypes: you can match on multiple types at once.)
• (Suggestions for further reading welcome.)

Attribute grammars. Consider taking this one with a grain of salt, because attribute grammars are what I work on every day, so I’m biased.  But, I think they’re an often overlooked data representation technique with a lot of really cool properties (probably overlooked because the word “grammar” is somewhat inappropriately in the name, for historical reasons.)

Attribute grammars are a bit like “datatypes with typeclasses.”  Except we call our types “nonterminals”, our constructors “productions”, and our type classes (restricted to one function) “attributes.”  However, they have a number of very important differences:

• AGs are not limited in constructors (productions) and functions (attributes.) (This should cause a problem: spot it. We solve this problem in our AG system with forwarding.)
• AGs permit flowing information down the tree (inherited attributes) as well as up (synthesized attributes.)  The type class approach would be limited to up (though inherited attributes can be simulated by making what flows up a function of what should flow down; see the sketch after this list.)
• AGs memoize the value of attributes during an evaluation, and demand no explicit ordering, as function calls would. (Very helpful, see: this discussion, for example.)
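Here’s that simulation trick in Haskell (a sketch; the environment is just a list of names, purely for illustration):

```
data Expr = Var String | Add Expr Expr

-- pp flows up (synthesized); its [String] argument plays the role of
-- an inherited attribute, an environment flowing down the tree
pp :: Expr -> [String] -> String
pp (Var x)   env = if x `elem` env then x else "<unbound " ++ x ++ ">"
pp (Add l r) env = pp l env ++ " + " ++ pp r env
```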

AGs are most frequently used to do language processing. (This is how ‘grammar’ got in the name.)  Unfortunately, there’s no one system I can point to as a good one to play with.  Our tool, Silver, is a great little stand-alone language. But UUAG (preprocesses to Haskell), JastAdd (preprocesses to Java), and Kiama (embedded in Scala) are all also good, and each has different trade-offs, which would be a whole post to itself.

Objects. By now, having gone through all the above (some months/years later!), hopefully coming back to objects makes you reconsider just what you think of as “object-oriented programming.”

Misc. Other things that have substantial effect on program design include: mutation, garbage collection, lazy evaluation, higher-order functions and closures, IO, and a whole lot more that I can’t think of at the moment.  If you actually wade through learning the above, you’ll pick up on these, but it’s worth taking the time to think about them separately.

For example: Java might force us to look at null/head/tail being the operations on lists because it’s hard to write the higher-order function for fold in Java, whereas it’s quite easy in Haskell. Lazy evaluation can allow us to construct some kinds of circular data structures, without resorting to mutation. (My favorite example of this is when constructing closures for recursive functions in an interpreter.) Cleanly separating the IO parts from the pure parts of a program, as Haskell forces you to do, is not only a general improvement to the usefulness of an API, it makes writing tests dramatically easier. Actually, the ease with which good tests can be written is probably a pretty good “code smell” indicator, too.

One last observation: We have a bunch of different things on the “object side of the fence:” objects, typeclasses, modules, attribute grammars, and there exist many more (just think about all the different kinds of object-oriented languages.)  But there’s basically just datatypes (and relations, I suppose) on the other side.  Why is that?

Well, if you go back to thinking about the “dual of datatypes” (using existential types), what you get isn’t actually objects. It’s something that subsumes objects, typeclasses, modules, etc. I’d personally consider it too unwieldy to work with on a regular basis, which is why we pick some nice properties we want and come up with restricted versions of it to actually let programmers play with.  (I believe that full power is actually available with “first class modules,” but I’m not 100% certain on that.)
