Well, just do the impossible!

One of the slightly bizarre aspects of computer science is that we do the “impossible” all the time.

One way this happens is that, while SAT is famously NP-complete, and we often regard NP-complete problems as “intractable,” it turns out that in practice, most instances we actually want to solve can be handled quite efficiently!  (And not just by approximation algorithms; exact algorithms, too.)
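To make this concrete, here is a toy DPLL-style SAT solver in Python (a sketch, not anything a production solver does in full; real solvers like MiniSat add clause learning and clever heuristics, which is a large part of why “intractable” instances fall so readily in practice). Clauses are lists of nonzero ints in the DIMACS convention: `n` means variable `n` is true, `-n` means it is false.

```python
def dpll(clauses, assignment=None):
    """Return a satisfying assignment {var: bool}, or None if unsatisfiable."""
    assignment = dict(assignment or {})
    # Unit propagation: repeatedly force clauses with one unassigned literal.
    while True:
        unit = None
        for clause in clauses:
            unassigned, satisfied = [], False
            for lit in clause:
                val = assignment.get(abs(lit))
                if val is None:
                    unassigned.append(lit)
                elif (lit > 0) == val:
                    satisfied = True
                    break
            if satisfied:
                continue
            if not unassigned:
                return None              # conflict: this clause is falsified
            if len(unassigned) == 1:
                unit = unassigned[0]
                break
        if unit is None:
            break
        assignment[abs(unit)] = unit > 0
    # Branch on an unassigned variable, trying both truth values.
    variables = {abs(l) for c in clauses for l in c}
    free = [v for v in variables if v not in assignment]
    if not free:
        return assignment                # every clause is satisfied
    v = free[0]
    for choice in (True, False):
        result = dpll(clauses, {**assignment, v: choice})
        if result is not None:
            return result
    return None
```

Even this naive version dispatches small instances instantly; the gap between worst-case theory and typical-case practice is the whole point.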

If you’re waving this off and muttering something about “well, maybe P = NP” right now, how about this one: we often go ahead and just “solve” undecidable problems, too.

Languages like Coq and Agda won’t let you write any program that isn’t provably terminating.  They use a really simple method to prove this automatically (pretty much just a structural recursion check), and it turns out that this usually isn’t a problem!  Occasionally it isn’t enough, but even then, those functions usually do have a termination proof, just not one that fits into the really simple check Coq applies automatically.
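A sketch of what “structural recursion” means, written in Python for concreteness (Coq’s actual checker works on Gallina terms, of course). The checker accepts a recursive definition only if some argument is a strict subterm of itself at every recursive call:

```python
# Accepted by a structural check: the recursive call is on xs[1:],
# the "tail" of the input, which is a structural subterm of it.
def length(xs):
    if not xs:
        return 0
    return 1 + length(xs[1:])

# Rejected by the same check: a % b is numerically smaller than b,
# but it is not a structural subterm of either argument, so the
# checker cannot see why this terminates (even though it does).
def gcd(a, b):
    if b == 0:
        return a
    return gcd(b, a % b)
```

`gcd` is the classic example of a perfectly terminating function that falls outside the simple check.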

Aside: the way you get around this is actually kind of interesting.  You add an extra “fuel” parameter to the function in question that decrements on every recursive call; then you turn the proof that the original (fuel-less) function terminates into a function that computes an upper bound on the amount of fuel needed for any given instance.

Once that’s done, Coq happily proves to itself the function terminates (thanks to the fuel decreasing) while you happily prove to yourself that you get the result you want, thanks to supplying enough fuel. Magic!
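To make the trick concrete, here is a hypothetical sketch in Python, using Euclid’s gcd (a function whose mod-based form fails Coq’s structural check). The fuel parameter decreases on every call, which is what the checker would see; the bound function plays the role of the termination proof:

```python
def gcd_fuel(fuel: int, a: int, b: int):
    """gcd with explicit fuel; returns None if the fuel runs out
    (in Coq you would return some default value instead)."""
    if fuel == 0:
        return None
    if b == 0:
        return a
    return gcd_fuel(fuel - 1, b, a % b)   # fuel strictly decreases

def enough_fuel(a: int, b: int) -> int:
    """The termination argument, turned into a bound: b strictly
    decreases on every step (a % b < b), so b + 1 calls suffice."""
    return b + 1

def gcd(a: int, b: int) -> int:
    # With enough fuel supplied up front, the None case is unreachable.
    return gcd_fuel(enough_fuel(a, b), a, b)
```

The checker is satisfied because `fuel` is structurally decreasing; you are satisfied because `enough_fuel` guarantees the out-of-fuel branch never fires.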

What brought this on today was this post (via Reddit) claiming that Rice’s theorem means dynamic typing will always be preferable to static typing. (There are many things to respond to there, but I want to comment on only this one:)

“Rice’s theorem shows us that it is impossible to determine for sure whether a given program has type errors.”

True for arbitrary programs, but do humans ever want to write one of those programs?

We don’t have a language yet with such a rich type system that it never gets in the way, but Rice’s theorem doesn’t really tell us that such a thing is actually impossible.  And further, it’s questionable whether we even want the type system to never get in the way!  (Just think of any horrifying yet technically correct code you’ve ever witnessed, and dream of a world where the compiler would slap whoever wrote that.)

A question to any hypothetical readers: I know abstract interpretation of dynamic languages like JavaScript, Python, or PHP has been rather successful in compiling them efficiently.  Does anyone know of a good resource describing what difficulties (language features, usage patterns) these efforts run into in getting good results (ignoring obvious abominations like eval and such)?

This entry was posted in Compilers.
