Calculus: Linear Approximations, I

Last week’s post on the Geometry of Polynomials generated a lot of interest from folks who teach calculus or are otherwise interested in it.  So I thought I’d start a thread about other ideas related to teaching calculus.

This idea is certainly not new.  But I think it is sorely underexploited in the calculus classroom.  I like it because it reinforces the idea of derivative as linear approximation.

The main idea is to rewrite

\displaystyle\lim_{h\to 0}\dfrac{f(x+h)-f(x)}h=f'(x)

as

f(x+h)\approx f(x)+hf'(x),

with the note that this approximation is valid when h\approx0.  Rewriting the limit in this way, we see that f(x+h), viewed as a function of h, is approximately linear in h: the limit in the definition exists exactly when there is a good linear approximation to f at x.

Moreover, in this sense, if

f(x+h)\approx f(x)+hg(x),

then it must be the case that f'(x)=g(x).  This is not difficult to prove: subtract f(x), divide by h, and let h\to0; the left-hand side becomes the limit defining f'(x), while the right-hand side is just g(x).
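Since the post itself contains no code, here is a small numeric sketch of my own illustrating why the coefficient is forced: for f(x)=x^2, the error f(x+h)-f(x)-hg(x), divided by h, tends to 0 only for the true coefficient g(x)=2x (the wrong candidate g(x)=2x+1 is my own choice, shown for contrast).

```python
# My own illustration (not from the post): for f(x) = x^2, the error
# f(x+h) - f(x) - h*g(x), divided by h, tends to 0 only when g is the
# true derivative g(x) = 2x; for g(x) = 2x + 1 it tends to -1 instead.
def f(x):
    return x * x

x = 1.5
for h in [1e-1, 1e-2, 1e-3]:
    good = (f(x + h) - f(x) - h * (2 * x)) / h      # equals h, tends to 0
    bad = (f(x + h) - f(x) - h * (2 * x + 1)) / h   # equals h - 1, tends to -1
    print(h, good, bad)
```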

Let’s look at a simple example, like finding the derivative of f(x)=x^2.  It’s easy to see that

f(x+h)=(x+h)^2=x^2+2hx+h^2\approx x^2+2hx.

So it’s easy to read off the derivative: ignore higher-order terms in h, and then look at the coefficient of h as a function of x.

Note that this is perfectly rigorous.  Ignoring higher-order terms in h is fine because, after dividing by h as in the definition, each of those terms still carries at least one factor of h, and so contributes 0 to the limit.  Thus the coefficient of h is the only term to survive the limit process.
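To see the reading-off in action on one more power (my own illustration; the function f(x)=x^3 is not from the post), expand f(x+h)=x^3+3x^2h+3xh^2+h^3 and ignore the higher-order terms; a numeric difference quotient agrees with the coefficient 3x^2.

```python
# My own sketch: for f(x) = x^3 the coefficient of h in the expansion
# of f(x+h) is 3x^2, and the difference quotient approaches it as h -> 0.
def f(x):
    return x ** 3

x = 2.0
for h in [1e-1, 1e-2, 1e-3]:
    quotient = (f(x + h) - f(x)) / h
    print(h, quotient)    # approaches 3 * x**2 = 12 as h shrinks
```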

Also note that this is nothing more than a rearrangement of the algebra necessary to compute the derivative using the usual definition.  I just find it more intuitive, and less cumbersome notationally.  But every step taken can be justified rigorously.

Moreover, this method is the one commonly used in more advanced mathematics, where functions take vectors as input.  So if

f({\bf v})={\bf v}\cdot{\bf v},

we compute

f({\bf u}+h{\bf v})={\bf u}\cdot{\bf u}+2h{\bf u}\cdot{\bf v}+h^2{\bf v}\cdot{\bf v},

and read off

\nabla_{\bf v}f({\bf u})=2{\bf u}\cdot{\bf v}.

I don’t want to go into more details here, since such calculations don’t occur in beginning calculus courses.  I just want to point out that this way of computing derivatives is in fact a natural one, but one which you don’t usually encounter until graduate-level courses.
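For readers curious anyway, the vector calculation above can be checked numerically.  This is my own sketch (the vectors u and v are arbitrary choices): a difference quotient of f({\bf v})={\bf v}\cdot{\bf v} in the direction {\bf v} agrees with the coefficient 2{\bf u}\cdot{\bf v} read off above.

```python
# My own sketch: for f(v) = v . v, expanding f(u + h v) gives
# u.u + 2h(u.v) + h^2(v.v), so the coefficient of h is 2(u.v).
def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def f(v):
    return dot(v, v)

u = [1.0, 2.0, 3.0]    # arbitrary test vectors
v = [0.5, -1.0, 2.0]
h = 1e-6
quotient = (f([ui + h * vi for ui, vi in zip(u, v)]) - f(u)) / h
print(quotient, 2 * dot(u, v))    # both near 9
```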

Let’s take a look at another example:  the derivative of f(x)=\sin(x), and see how it looks using this rewrite.  We first write

\sin(x+h)=\sin(x)\cos(h)+\cos(x)\sin(h).

Now replace all functions of h with their linear approximations.  Since \cos(h)\approx1 and \sin(h)\approx h near h=0, we have

\sin(x+h)\approx\sin(x)+h\cos(x).

This immediately gives that \cos(x) is the derivative of \sin(x).

Now the approximation \cos(h)\approx1 is easy to justify geometrically by looking at the graph of \cos(x).  But how do we justify the approximation \sin(h)\approx h?

Of course there is no getting around this.  The limit

\displaystyle\lim_{h\to 0}\dfrac{\sin(h)}h=1

is the one difficult calculation in computing the derivative of \sin(x).  So then you’ve got to provide your favorite proof of this limit, and then move on.  But this approximation helps to illustrate the essential point:  the differentiability of \sin(x) at x=0 does, in a real sense, imply the differentiability of \sin(x) everywhere else.
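As a numeric sanity check (my own sketch, not part of the post): the approximation \sin(x+h)\approx\sin(x)+h\cos(x) has error of order h^2, which is exactly what makes dropping that error in the limit legitimate.

```python
import math

# My own sketch: the error of the linear approximation
# sin(x+h) ~ sin(x) + h*cos(x) shrinks like h^2.
x = 0.7    # an arbitrary base point
for h in [1e-1, 1e-2, 1e-3]:
    err = math.sin(x + h) - (math.sin(x) + h * math.cos(x))
    print(h, abs(err) / h ** 2)    # roughly constant, so the error is O(h^2)
```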

So computing derivatives in this way doesn’t save any of the hard work, but I think it makes the work a bit more transparent.  And as we continually replace functions of h with their linear approximations, this aspect of the derivative is regularly being emphasized.

How would we use this technique to differentiate f(x)=\sqrt x?  We need

\sqrt{x+h}\approx\sqrt x+hf'(x),

and so

x+h\approx \left(\sqrt x+hf'(x)\right)^2\approx x+2h\sqrt xf'(x).

Since the coefficient of h on the left is 1, so must be the coefficient of h on the right, so that

2\sqrt xf'(x)=1,

and hence f'(x)=\dfrac1{2\sqrt x}.
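A quick numeric check (my own sketch): the coefficient matching does produce the familiar derivative of \sqrt x.

```python
import math

# My own sketch: comparing a difference quotient of sqrt with the
# value 1 / (2 * sqrt(x)) obtained from the coefficient matching.
x = 4.0
h = 1e-6
quotient = (math.sqrt(x + h) - math.sqrt(x)) / h
predicted = 1 / (2 * math.sqrt(x))
print(quotient, predicted)    # both near 0.25
```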

As a last example for this week, consider taking the derivative of f(x)=\tan(x).  Then we have

\tan(x+h)=\dfrac{\tan(x)+\tan(h)}{1-\tan(x)\tan(h)}.

Now since \sin(h)\approx h and \cos(h)\approx 1, we have \tan(h)\approx h, and so we can replace to get

\tan(x+h)\approx\dfrac{\tan(x)+h}{1-h\tan(x)}.

Now what do we do?  Since we’re considering h near 0, then h\tan(x) is small (as small as we like), and so we can consider

\dfrac1{1-h\tan(x)}

as the sum of the infinite geometric series

1+h\tan(x)+h^2\tan^2(x)+\cdots.

Replacing with the linear approximation to this sum, we get

\tan(x+h)\approx(\tan(x)+h)(1+h\tan(x)),

and so, ignoring higher-order terms in h,

\tan(x+h)\approx\tan(x)+h\left(1+\tan^2(x)\right).

This gives the derivative of \tan(x) to be

1+\tan^2(x)=\sec^2(x).

Now this method takes a bit more work than just using the quotient rule (as usually done).  But using the quotient rule is a purely mechanical process; this way, we are constantly thinking, “How do I replace this expression with a good linear approximation?”  Perhaps more is learned this way?
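As a final sanity check (my own sketch, not from the post), the coefficient 1+\tan^2(x)=\sec^2(x) obtained this way matches a numeric difference quotient of \tan.

```python
import math

# My own sketch: the difference quotient of tan agrees with
# 1 + tan(x)^2, i.e. sec(x)^2, at an arbitrary base point.
x = 0.5
h = 1e-6
quotient = (math.tan(x + h) - math.tan(x)) / h
predicted = 1 + math.tan(x) ** 2
print(quotient, predicted)
```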

There are more interesting examples using this geometric series idea.  We’ll look at a few more next time, and then use this idea to prove the product, quotient, and chain rules.  Until then!