There is one theorem of vector calculus that is essential to the development of multipoles - computing the dipole moment. Jackson blithely integrates by parts (for a charge/current density with compact support) thusly:

(2.1)   \int_V \vec{J}\,d^3x = -\int_V \vec{x}\,(\vec{\nabla}\cdot\vec{J})\,d^3x

Then, using the continuity equation and the fact that \rho and \vec{J} are presumed harmonic with time dependence e^{-i\omega t}, we substitute \vec{\nabla}\cdot\vec{J} = i\omega\rho to obtain:

(2.2)   \int_V \vec{J}\,d^3x = -i\omega\int_V \vec{x}\,\rho(\vec{x})\,d^3x = -i\omega\,\vec{p}
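As a sanity check on the sign convention in this substitution, we can verify symbolically (a sketch in sympy, using a hypothetical Gaussian profile \psi that is not part of the original argument) that for e^{-i\omega t} time dependence the continuity equation \partial\rho/\partial t + \vec{\nabla}\cdot\vec{J} = 0 is equivalent to \vec{\nabla}\cdot\vec{J} = i\omega\rho:

```python
import sympy as sp

x, y, z, t = sp.symbols('x y z t', real=True)
omega = sp.symbols('omega', positive=True)

# Hypothetical spatial profile (any smooth function would do)
psi = sp.exp(-(x**2 + y**2 + z**2))

# Harmonic current density J = grad(psi) * e^{-i omega t}
J = [sp.diff(psi, v) * sp.exp(-sp.I * omega * t) for v in (x, y, z)]
divJ = sp.diff(J[0], x) + sp.diff(J[1], y) + sp.diff(J[2], z)

# Define rho via div J = i*omega*rho ...
rho = divJ / (sp.I * omega)

# ... and confirm the continuity equation d(rho)/dt + div J = 0 holds
assert sp.simplify(sp.diff(rho, t) + divJ) == 0
```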

However, this leaves a nasty question: Just *how* does this
integration by parts work? Where does the first equation come from?
After all, we can't rely on always being able to look up a result like
this; we have to be able to derive it, and hence learn a method we can
use when we have to do the same thing for a different functional form.

We might guess that deriving it will use the divergence theorem (or Green's theorem(s), if you like), but any naive attempt to make it do so will lead to pain and suffering. Let's see how it goes in this particularly nasty (and yet quite simple) case.

Recall that the idea behind integration by parts is to form the
derivative of a product, distribute the derivative, integrate, and
rearrange:

(2.3)   d(uv) = u\,dv + v\,du \quad\Rightarrow\quad \int_a^b u\,dv = (uv)\Big|_a^b - \int_a^b v\,du

where if the products u(a)v(a) = u(b)v(b) = 0 (as will often be the case when a \to -\infty, b \to +\infty and u and v have compact support) the process ``throws the derivative from one function over to the other'':

(2.4)   \int_a^b u\,dv = -\int_a^b v\,du
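As a quick numerical check (a sketch, not part of the derivation), we can confirm (2.4) with numpy for two functions whose product vanishes at the ends of a wide interval; Gaussians here stand in for compact support:

```python
import numpy as np

x = np.linspace(-10.0, 10.0, 20001)
h = x[1] - x[0]
u = np.exp(-x**2)        # u and v are effectively compactly supported,
v = x * np.exp(-x**2)    # so the boundary term (uv)| vanishes

du = np.gradient(u, x)   # u'
dv = np.gradient(v, x)   # v'

lhs = np.sum(u * dv) * h     # integral of u dv
rhs = -np.sum(v * du) * h    # -(integral of v du)

assert abs(lhs - rhs) < 1e-8
# exact value for this pair: (1/2) sqrt(pi/2)
assert abs(lhs - 0.5 * np.sqrt(np.pi / 2)) < 1e-4
```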

The exact same idea holds for vector calculus, except that the idea is
to use the *divergence theorem* to form a *surface integral*
instead of a boundary term. Recall that there are many forms of the
divergence theorem, but they all map \vec{\nabla} inside a volume
integral to the outward normal \hat{n} on the bounding surface, in the
following integral forms:

(2.5)   \int_V \vec{\nabla}\phi\,d^3x = \oint_S \phi\,\hat{n}\,dA

(2.6)   \int_V \vec{\nabla}\cdot\vec{A}\,d^3x = \oint_S \hat{n}\cdot\vec{A}\,dA

(2.7)   \int_V \vec{\nabla}\times\vec{A}\,d^3x = \oint_S \hat{n}\times\vec{A}\,dA
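The gradient form (2.5) is easy to check symbolically on a unit cube, where only the x = 0 and x = 1 faces contribute to the x-component (a sketch in sympy with an arbitrary, hypothetical polynomial field):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
f = x**2 * y * z   # arbitrary smooth scalar field (a hypothetical example)

# x-component of the volume integral of grad f over the unit cube
lhs = sp.integrate(sp.diff(f, x), (x, 0, 1), (y, 0, 1), (z, 0, 1))

# x-component of the surface integral of n*f: only the x = 1 (n = +x-hat)
# and x = 0 (n = -x-hat) faces contribute, since n_x = 0 on the others
rhs = (sp.integrate(f.subs(x, 1), (y, 0, 1), (z, 0, 1))
       - sp.integrate(f.subs(x, 0), (y, 0, 1), (z, 0, 1)))

assert sp.simplify(lhs - rhs) == 0
assert lhs == sp.Rational(1, 4)
```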

To prove Jackson's expression we might therefore try to find a suitable
product whose divergence contains \vec{J} as one term. This isn't too
easy, however. The problem is finding the right tensor form. Let us
look at the following divergence:

(2.8)   \vec{\nabla}\cdot(x\vec{J}) = (\vec{\nabla}x)\cdot\vec{J} + x\,(\vec{\nabla}\cdot\vec{J}) = J_x + x\,(\vec{\nabla}\cdot\vec{J})
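Identity (2.8) is mechanical to verify with a computer algebra system; here is a sketch in sympy with an arbitrary (hypothetical) \vec{J}:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
# A hypothetical current density with arbitrary smooth components
J = sp.Matrix([sp.sin(y) * z, x * y**2, sp.exp(x) * z])

def div(F):
    return sp.diff(F[0], x) + sp.diff(F[1], y) + sp.diff(F[2], z)

lhs = div(x * J)          # div(x J)
rhs = J[0] + x * div(J)   # J_x + x (div J)

assert sp.simplify(lhs - rhs) == 0
```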

This looks promising; it is the x-component of a result we might use. However, if we try to apply this to a matrix (dyadic) form in what looks like the obvious generalization,

(2.9)   \vec{\nabla}\cdot(\vec{x}\,\vec{J}) = (\vec{\nabla}\cdot\vec{x})\,\vec{J} + (\vec{x}\cdot\vec{\nabla})\,\vec{J} = 3\vec{J} + (\vec{x}\cdot\vec{\nabla})\,\vec{J}

we get the wrong answer: the divergence contracts on the first index of the dyad \vec{x}\,\vec{J}, so the second term is (\vec{x}\cdot\vec{\nabla})\vec{J} rather than the \vec{x}\,(\vec{\nabla}\cdot\vec{J}) we need.

To assemble the *right* answer, we have to sum over the three
separate statements:

\vec{\nabla}\cdot(x\vec{J})\,\hat{x} = \left(J_x + x\,(\vec{\nabla}\cdot\vec{J})\right)\hat{x}
\vec{\nabla}\cdot(y\vec{J})\,\hat{y} = \left(J_y + y\,(\vec{\nabla}\cdot\vec{J})\right)\hat{y}
\vec{\nabla}\cdot(z\vec{J})\,\hat{z} = \left(J_z + z\,(\vec{\nabla}\cdot\vec{J})\right)\hat{z}

or

(2.10)   \sum_i \hat{x}_i\,\vec{\nabla}\cdot(x_i\vec{J}) = \vec{J} + \vec{x}\,(\vec{\nabla}\cdot\vec{J})
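The assembled identity (2.10) can be checked the same way, component by component (a sketch in sympy with another arbitrary, hypothetical \vec{J}):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
r = [x, y, z]
# A hypothetical current density with arbitrary smooth components
J = sp.Matrix([x * sp.cos(z), y**2 + z, sp.sin(x) * y])

def div(F):
    return sum(sp.diff(F[i], r[i]) for i in range(3))

# Left side of (2.10): the i-th component is div(x_i J)
lhs = sp.Matrix([div(r[i] * J) for i in range(3)])
# Right side: J + x (div J)
rhs = J + sp.Matrix(r) * div(J)

assert sp.simplify(lhs - rhs) == sp.zeros(3, 1)
```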

Integrating both sides of (2.10) over a volume V that contains the support of \vec{J}, and applying the divergence theorem (2.6) componentwise to the left-hand side:

(2.11)   \int_V \sum_i \hat{x}_i\,\vec{\nabla}\cdot(x_i\vec{J})\,d^3x = \int_V \vec{J}\,d^3x + \int_V \vec{x}\,(\vec{\nabla}\cdot\vec{J})\,d^3x

(2.12)   \int_V \sum_i \hat{x}_i\,\vec{\nabla}\cdot(x_i\vec{J})\,d^3x = \sum_i \hat{x}_i \oint_S x_i\,\vec{J}\cdot\hat{n}\,dA

(2.13)   \sum_i \hat{x}_i \oint_S x_i\,\vec{J}\cdot\hat{n}\,dA = 0

(2.14)   0 = \int_V \vec{J}\,d^3x + \int_V \vec{x}\,(\vec{\nabla}\cdot\vec{J})\,d^3x

where we have used the fact that \vec{J} (and \rho) have compact support, so that the surface integral vanishes on any surface S outside of that support.

We rearrange this and get:

(2.15)   \int_V \vec{J}\,d^3x = -\int_V \vec{x}\,(\vec{\nabla}\cdot\vec{J})\,d^3x

which is exactly Jackson's starting expression (2.1).
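We can also check (2.15) numerically end to end. The sketch below uses a hypothetical current density \vec{J} = \hat{x}\,e^{-r^2} (not from the text, but effectively compactly supported on the grid) and compares both sides component by component; for this choice both sides equal \pi^{3/2}\,\hat{x}:

```python
import numpy as np

# Cubic grid large enough that J = x-hat * exp(-r^2) vanishes at the edges
n = 81
s = np.linspace(-5.0, 5.0, n)
h = s[1] - s[0]
X, Y, Z = np.meshgrid(s, s, s, indexing='ij')

Jx = np.exp(-(X**2 + Y**2 + Z**2))   # x-component; Jy = Jz = 0
# With only an x-component, div J reduces to dJx/dx
divJ = np.gradient(Jx, h, axis=0)

dV = h**3
lhs = np.array([Jx.sum(), 0.0, 0.0]) * dV                               # integral of J
rhs = -np.array([(X*divJ).sum(), (Y*divJ).sum(), (Z*divJ).sum()]) * dV  # -integral of x (div J)

assert np.allclose(lhs, rhs, atol=1e-6)
assert abs(lhs[0] - np.pi**1.5) < 1e-6
```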

This illustrates one of the most difficult examples of using integration
by parts in vector calculus. In general, seek out a tensor form that
can be expressed as a pure vector derivative and that evaluates to two
terms, one of which is the term you wish to integrate (but can't) and
the other a term you *can* integrate by proceeding as above. Apply the
generalized divergence theorem, throw out the boundary term (or not -
if one keeps it one derives e.g. Green's Theorem(s), which are nothing
more than integration by parts done in this manner), rearrange, and
you're off to the races.

Note well that the tensor forms may *not be trivial!* Sometimes you
do have to work a bit to find just the right combination to do the job.