As I mentioned last week, I am a fan of emphasizing the idea of a derivative as a linear approximation. I ended that discussion by using this method to find the derivative of a particular function. Today, we’ll look at some more examples, and then derive the product, quotient, and chain rules.
Differentiating $f(x) = \dfrac{1}{x}$ is particularly nice using this method. We first write
$$f(x+h) = \frac{1}{x+h}.$$
Then we factor out an $x$ from the denominator, giving
$$\frac{1}{x+h} = \frac{1}{x}\cdot\frac{1}{1 + h/x}.$$
As we did at the end of last week’s post, we can make $h/x$ as small as we like, and so approximate $\dfrac{1}{1+h/x}$ by considering it as the sum of an infinite series (a geometric series):
$$\frac{1}{1+h/x} = 1 - \frac{h}{x} + \frac{h^2}{x^2} - \cdots \approx 1 - \frac{h}{x}.$$
Finally, we have
$$\frac{1}{x+h} \approx \frac{1}{x}\left(1 - \frac{h}{x}\right) = \frac{1}{x} - \frac{h}{x^2},$$
which gives the derivative of $\dfrac{1}{x}$ as $-\dfrac{1}{x^2}$.
We’ll look at one more example involving approximating with geometric series before moving on to the product, quotient, and chain rules: differentiating a quotient whose denominator factors. We first factor the denominator, then approximate each factor with a geometric series so that, to first order, only a constant term and a multiple of $h$ remain. This finally results in a linear approximation whose coefficient of $h$ gives us the correct derivative.
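Here is a sketch of one such computation; the particular function $\dfrac{1}{x^2-1}$ is simply an illustrative choice. Factoring the denominator gives
$$\frac{1}{(x+h)^2-1} = \frac{1}{(x+h-1)(x+h+1)} = \frac{1}{(x-1)(x+1)}\cdot\frac{1}{\left(1+\dfrac{h}{x-1}\right)\left(1+\dfrac{h}{x+1}\right)}.$$
Approximating each factor with a geometric series and keeping only the first-order terms,
$$\frac{1}{(x+h)^2-1} \approx \frac{1}{x^2-1}\left(1-\frac{h}{x-1}\right)\left(1-\frac{h}{x+1}\right) \approx \frac{1}{x^2-1} - h\,\frac{2x}{(x^2-1)^2},$$
and the coefficient of $h$ is exactly the derivative of $\dfrac{1}{x^2-1}$, namely $-\dfrac{2x}{(x^2-1)^2}$.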
Now let’s move on to the product rule:
$$\frac{d}{dx}\big(f(x)\,g(x)\big) = f'(x)g(x) + f(x)g'(x).$$
Here, and for the rest of this discussion, we assume that all functions have the necessary differentiability.
We want to approximate $f(x+h)\,g(x+h)$, so we replace each factor with its linear approximation:
$$f(x+h)\,g(x+h) \approx \big(f(x) + hf'(x)\big)\big(g(x) + hg'(x)\big).$$
Now expand and keep only the first-order terms:
$$f(x+h)\,g(x+h) \approx f(x)g(x) + h\big(f'(x)g(x) + f(x)g'(x)\big).$$
And there’s the product rule: just read off the coefficient of $h$.
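As a quick sanity check, take $f(x) = x^2$ and $g(x) = x^3$ (an illustrative pair). The linear approximations give
$$\big(x^2 + 2xh\big)\big(x^3 + 3x^2h\big) = x^5 + 5x^4h + 6x^3h^2 \approx x^5 + 5x^4h,$$
and the coefficient of $h$ is $5x^4$, exactly the derivative of $x^5$.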
There is a compelling reason to use this method. The traditional proof begins by evaluating
$$\lim_{h\to 0}\frac{f(x+h)\,g(x+h) - f(x)\,g(x)}{h}.$$
The next step? Just add and subtract $f(x)\,g(x+h)$ (or perhaps $f(x+h)\,g(x)$) in the numerator. I have found that there is just no way to convincingly motivate this step. Yes, those of us who have seen it crop up in various forms know to try such tricks, but the typical first-time student of calculus is mystified by that mysterious step. Using linear approximations, there is absolutely no mystery at all.
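To make the comparison concrete, here is how the traditional computation goes once $f(x)\,g(x+h)$ has been added and subtracted:
$$\lim_{h\to 0}\frac{f(x+h)\,g(x+h) - f(x)\,g(x)}{h} = \lim_{h\to 0}\left(\frac{f(x+h)-f(x)}{h}\,g(x+h) + f(x)\,\frac{g(x+h)-g(x)}{h}\right) = f'(x)g(x) + f(x)g'(x),$$
where the continuity of $g$ is needed to conclude that $g(x+h)\to g(x)$. The algebra works, but nothing about the original difference quotient suggests that particular regrouping.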
The quotient rule is next:
$$\frac{d}{dx}\left(\frac{f(x)}{g(x)}\right) = \frac{f'(x)g(x) - f(x)g'(x)}{g(x)^2}.$$
First approximate
$$\frac{f(x+h)}{g(x+h)} \approx \frac{f(x) + hf'(x)}{g(x) + hg'(x)}.$$
Now since $h\,g'(x)/g(x)$ is small, we approximate
$$\frac{1}{g(x) + hg'(x)} = \frac{1}{g(x)}\cdot\frac{1}{1 + h\,g'(x)/g(x)} \approx \frac{1}{g(x)}\left(1 - h\,\frac{g'(x)}{g(x)}\right),$$
so that
$$\frac{f(x+h)}{g(x+h)} \approx \big(f(x) + hf'(x)\big)\,\frac{1}{g(x)}\left(1 - h\,\frac{g'(x)}{g(x)}\right).$$
Multiplying out and keeping just the first-order terms results in
$$\frac{f(x+h)}{g(x+h)} \approx \frac{f(x)}{g(x)} + h\,\frac{f'(x)g(x) - f(x)g'(x)}{g(x)^2}.$$
Voilà! The quotient rule. Now the usual proofs involve (1) using the product rule with $f(x)$ and $\dfrac{1}{g(x)}$, but note that this involves using the chain rule to differentiate $\dfrac{1}{g(x)}$,
or (2) the mysterious “adding and subtracting the same expression” in the numerator. Using linear approximations avoids both.
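To spell out approach (1): write $\dfrac{f(x)}{g(x)} = f(x)\cdot\dfrac{1}{g(x)}$, use the chain rule to get
$$\frac{d}{dx}\,\frac{1}{g(x)} = -\frac{g'(x)}{g(x)^2},$$
and then apply the product rule:
$$f'(x)\cdot\frac{1}{g(x)} + f(x)\left(-\frac{g'(x)}{g(x)^2}\right) = \frac{f'(x)g(x) - f(x)g'(x)}{g(x)^2}.$$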
The chain rule is almost ridiculously easy to prove using linear approximations. Begin by approximating
$$f(g(x+h)) \approx f\big(g(x) + hg'(x)\big).$$
Note that we’re replacing the argument to a function with its linear approximation, but since we assume that $f$ is differentiable, it is also continuous, so this poses no real problem. Yes, perhaps there is a little hand-waving here, but in my opinion, no rigor is really lost.
Since $g$ is differentiable, $g'(x)$ exists, and so we can make $hg'(x)$ as small as we like, so the “$hg'(x)$” term acts like the “$h$” term in our linear approximation. Additionally, the “$g(x)$” term acts like the “$x$” term, resulting in
$$f\big(g(x) + hg'(x)\big) \approx f(g(x)) + hg'(x)\,f'(g(x)).$$
Reading off the coefficient of $h$ gives the chain rule:
$$\frac{d}{dx}\,f(g(x)) = f'(g(x))\,g'(x).$$
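For a concrete instance, take $f(u) = u^3$ and $g(x) = x^2 + 1$ (an illustrative pair). Then
$$f\big(g(x) + hg'(x)\big) = \big(x^2 + 1 + 2xh\big)^3 \approx (x^2+1)^3 + 3(x^2+1)^2\cdot 2xh,$$
so the coefficient of $h$ is $6x(x^2+1)^2 = f'(g(x))\,g'(x)$, as expected.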
So I’ve said my piece. By this time, you’re either convinced that using linear approximations is a good idea, or you’re not. But I think these methods reflect more accurately the intuition behind the calculations — and reflect what mathematicians do in practice.
In addition, using linear approximations involves more than just mechanically applying formulas. If all you ever do is apply the product, quotient, and chain rules, it’s just mechanics. Using linear approximations requires a bit more understanding of what’s really going on underneath the hood, as it were.
If you find more neat examples of differentiation using this method, please comment! I know I’d be interested, and I’m sure others would as well.
In my next installment (or two or three) in this calculus series, I’ll talk about one of my favorite topics — hyperbolic trigonometry.