## Teaching math

I want to repost Robert Talbert’s repost of Conrad Wolfram’s (note: not Stephen Wolfram, but his brother) TED talk on mathematics education.  It’s the usual rant about pre-college-level mathematics education, but I wanted to add something to it.

The way I see it, there are three levels at which you can try to teach mathematics (and other subjects, for that matter). The subject of proofs, by the way, is orthogonal to these levels; you can teach proofs at any of them:

1. Calculation. This is pretty much how it’s done currently. See: every calculus textbook that gives you ten thousand variants on “find the derivative of $x^2 + \cos(x)$”.
2. Use. People often like to make the distinction between “exercises,” which are just calculation, and “problems,” which require higher-level understanding. At this level, we try to get students to take a real problem and solve it using the tools and techniques they’ve been taught.
3. Implementation. This, I think, is too often ignored, and I’ll be elaborating on it below, but from a computer scientist’s perspective it’s an obvious level to consider. Without computers, though, I don’t think this level is possible to teach, which is perhaps why it hasn’t been discussed enough.

Imagine, for a moment, we were talking about databases. It’s obvious that one thing we’d teach is how to run queries and how to construct schemas (2). We’d also want to teach how to build a DBMS (3). But nobody would argue in favor of forcing students to run queries repeatedly, by hand, on paper, over the same tables of data, again and again (1). That would be stupid. But that’s the state of mathematics education today.

Back in the day, this debate often involved calculators: how much should kids be allowed to use them?  One side decried their use: “They’ll learn nothing!”  The other side sneered: “Why make them learn what a mere machine can do?”

Nobody suggested teaching kids to build a calculator. Or at least, nobody heard them.

When I was teaching polymorphism to my C++ students last year, one example I used was a program that did symbolic differentiation. The couple of students who were paying attention seemed a bit stunned. What stunned them? A couple of things.

First, code like the following (in Haskell, because I hate C++):

```haskell
diff FX        = FConst 1
diff (f :+: g) = diff f :+: diff g
diff (f :*: g) = (diff f :*: g) :+: (diff g :*: f)
```

seemed miraculous and entirely unlike anything they thought they knew about calculus. This code should be obvious. You learn these rules right away, but the fact that we can simply encode this knowledge directly in a program often strikes people as artificial intelligence or something. But no: derivatives are just computation, not complex thought.
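For the curious, here is a minimal, self-contained sketch of what a snippet like that assumes. The type and its constructor names (`Expr`, `FX`, `FConst`, `:+:`, `:*:`) are my own choices for this post, and I’ve thrown in an `eval` so the derivatives can be sanity-checked numerically:

```haskell
-- A tiny expression language: the variable x, constants, sums, products.
data Expr = FX
          | FConst Double
          | Expr :+: Expr
          | Expr :*: Expr
          deriving Show

infixl 6 :+:
infixl 7 :*:

-- Symbolic differentiation: each equation is a textbook rule.
diff :: Expr -> Expr
diff FX         = FConst 1
diff (FConst _) = FConst 0
diff (f :+: g)  = diff f :+: diff g
diff (f :*: g)  = (diff f :*: g) :+: (diff g :*: f)

-- Evaluate an expression at a point.
eval :: Double -> Expr -> Double
eval x FX         = x
eval _ (FConst c) = c
eval x (f :+: g)  = eval x f + eval x g
eval x (f :*: g)  = eval x f * eval x g
```

For instance, `eval 3 (diff (FX :*: FX))` works out to `6.0`, the slope of $x^2$ at $x = 3$. The whole thing fits on an index card, which is rather the point.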

Second, the representation was entirely novel. Given something like $f(x) = (x + 2)^2$, nobody could come up with a way of encoding this function without simplifying it first. Using function composition (composing $x^2$ and $x + 2$) didn’t seem to strike them as a possibility, despite the chain rule being one of the most important derivative rules. Changing the problem to $f(x) = \sin(x+2)$ triggered that reflex memory from calculus, and they knew what to do (with some prodding).
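To make the composition point concrete, here’s one hedged sketch of how such an expression type could be extended with `Sin`, `Cos`, and a composition constructor `:.:` (all the names here are mine, not anything from the classroom). The chain rule then becomes a single pattern-match equation:

```haskell
data Expr = FX
          | FConst Double
          | Expr :+: Expr
          | Expr :*: Expr
          | Sin Expr
          | Cos Expr
          | Expr :.: Expr      -- (f :.: g) represents f(g(x))
          deriving Show

infixl 6 :+:
infixl 7 :*:
infixr 9 :.:

diff :: Expr -> Expr
diff FX         = FConst 1
diff (FConst _) = FConst 0
diff (f :+: g)  = diff f :+: diff g
diff (f :*: g)  = (diff f :*: g) :+: (diff g :*: f)
diff (Sin f)    = Cos f :*: diff f                   -- chain rule, special case
diff (Cos f)    = FConst (-1) :*: Sin f :*: diff f
diff (f :.: g)  = (diff f :.: g) :*: diff g          -- the chain rule itself

eval :: Double -> Expr -> Double
eval x FX         = x
eval _ (FConst c) = c
eval x (f :+: g)  = eval x f + eval x g
eval x (f :*: g)  = eval x f * eval x g
eval x (Sin f)    = sin (eval x f)
eval x (Cos f)    = cos (eval x f)
eval x (f :.: g)  = eval (eval x g) f
```

With this, $f(x) = (x + 2)^2$ needs no simplification at all: it is just `(FX :*: FX) :.: (FX :+: FConst 2)`, and evaluating its derivative at $x = 1$ gives $2(1+2) = 6$, as it should.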

Changing the problem slightly from a pre-canned formula and getting no response is a remarkable failure of education. We taught them the stuff a computer can do, and they’ll probably forget it eventually. Meanwhile, the stuff a human should be able to do (like represent a formula in a form their programs can work with!), they don’t understand.