Notes on error analysis of numerical integration
For your homework, you’ll be doing error analysis in the simplest way possible. I’ve asked you to integrate a function that you can integrate with pen and paper, so to determine the dependence of the error $E$ on the stepsize $h$, all you need to do is to compute (numeric answer − analytic answer) for each value of $h$.
The Taylor series business is a way to predict what these errors will be before you test them, and to determine how good an algorithm will be. You don’t necessarily need to reproduce this, but you should understand it.
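As a concrete sketch of that procedure (the integrand $x^2$ on $[0,1]$ and the function names here are my own illustrative choices, not part of the assignment):

```python
import numpy as np

def left_hand_rule(f, a, b, n):
    """Left-hand Riemann sum of f over [a, b] using n bins of width h = (b - a)/n."""
    h = (b - a) / n
    return h * np.sum(f(a + h * np.arange(n)))  # evaluate f at the left edge of each bin

# A function we can integrate with pen and paper: x**2 on [0, 1] gives 1/3.
analytic = 1.0 / 3.0
for n in (10, 100, 1000):
    h = 1.0 / n
    E = left_hand_rule(np.square, 0.0, 1.0, n) - analytic
    print(f"h = {h:.3f}   E = {E:+.3e}")
```

The table this prints is exactly the kind of evidence the homework asks for: each row is (numeric − analytic) at one value of $h$, and the trend in $E$ as $h$ shrinks reveals the order of the algorithm.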
1 The approach
We want to come up with a formula, for each of our three methods, that relates the error to the stepsize for a single bin. Each of the integration methods approximates the function over the bin as some simple shape (a rectangle or a trapezoid); we will write down the functional form of that shape and integrate it, and then subtract the “actual” integral of the function.
If we are doing this for arbitrary functions, we don’t know the actual integral. This is where the Taylor series comes in: we can write a Taylor series that expands the function about the left-hand side of the Riemann-sum bin (which I shall call $x = 0$). This is convenient, since we already have a “small quantity” in mind: the bin width $h$. We will then get a polynomial in $h$ for both the exact integral and the approximation, which will make it very easy to compare terms and figure out what power of $h$ appears in the leading term of the error. Why do we care about the power? Simply because if $h$ is small enough, the power of $h$ outweighs any coefficient in front.
2 Integrating the exact function
Call the function we’d like to integrate f(x); let’s integrate it over a single bin. Its Taylor series up to second
order is
\[
f(x) = f(0) + x f'(0) + \frac{1}{2} x^2 f''(0) + O(x^3) \tag{1}
\]
Its integral is then
\[
\int_0^h f(x)\,dx = \int_0^h \left[ f(0) + x f'(0) + \frac{1}{2} x^2 f''(0) + O(x^3) \right] dx \tag{2}
\]
\[
= \left[ x f(0) + \frac{1}{2} x^2 f'(0) + \frac{1}{2}\cdot\frac{1}{3} x^3 f''(0) + O(x^4) \right]_0^h \tag{3}
\]
\[
= h f(0) + \frac{1}{2} h^2 f'(0) + \frac{1}{6} h^3 f''(0) + O(h^4) \tag{4}
\]
This last equation is, then, the integral of f(x) over one bin. We will be subtracting the results we get from
integrating the approximations from it to see how they compare.
3 The left-hand rule
The left-hand rule approximates the function as a horizontal line of height equal to the function’s value at
the left side of the interval:
\[
f_{\rm LHR}(x) = f(0) \tag{5}
\]
This is trivially easy to integrate:
\[
\int_0^h f_{\rm LHR}(x)\,dx = \int_0^h f(0)\,dx \tag{6}
\]
\[
= x f(0) \Big|_0^h \tag{7}
\]
\[
= h f(0) \tag{8}
\]
Subtracting this from the exact result, we see we’ve gotten the first term right, but all the rest are wrong.
Since we only really care about the leading-order error term, we can write the error as
\[
E_{\rm LHR} = \frac{1}{2} h^2 f'(0) + O(h^3). \tag{9}
\]
Thus we see that the error made by the left-hand rule is of order $h^2$. This is the error per bin; if we want the error for the whole integral, we have to multiply by the number of bins $N_b = (b-a)/h$. The power of $h$ in the denominator cancels one of the powers of $h$ in the error per bin. This makes sense: if you cut the bin width in half, you require twice as many bins, so you will get one-half, not one-fourth, the overall error. Thus, we conclude:
The left-hand rule is a first-order algorithm: the error in the integral is proportional to h.
4 The midpoint rule
The midpoint rule likewise approximates the function as a horizontal line, but this time one evaluated at the midpoint of the interval. In order to know how high this line is, we have to evaluate the function at its midpoint. We could write $f(h/2)$, but this is unhelpful, since everything else in our analysis involves the evaluation of $f$ and its derivatives at the left-hand side of the interval. So we instead write $f(h/2)$ in terms of the Taylor series:
\[
f_{\rm mid}(x) = f(0) + \frac{1}{2} h f'(0) + \frac{1}{2}\left(\frac{1}{2}\right)^2 h^2 f''(0) \tag{10}
\]
As with the Left-Hand Rule, we can trivially integrate this by multiplying by h, since it’s just a rectangle
with the height given above and width h. This gives
\[
\int_0^h f_{\rm mid}(x)\,dx = \int_0^h \left[ f(0) + \frac{1}{2} h f'(0) + \frac{1}{8} h^2 f''(0) \right] dx \tag{11}
\]
\[
= h f(0) + \frac{1}{2} h^2 f'(0) + \frac{1}{8} h^3 f''(0) \tag{12}
\]
Note here that the first two terms match the exact integral form (Eq. 4). The approximation made by the
midpoint rule correctly “captures” both the height of the function at the left-hand side and its slope – the
first two terms of the full Taylor series, not just the first one. The error is thus
\[
E_{\rm mid} = \frac{1}{24} h^3 f''(0) + O(h^4). \tag{13}
\]
(The 1/24 comes from subtracting 1/8 from 1/6.) Since the leading term in the “error per bin” formula is $O(h^3)$, the logic above says that the overall error in the integral is proportional to $h^2$.
Thus:
The midpoint rule is a second-order algorithm: the error in the integral is proportional to $h^2$.
Note the significance of this. In the left-hand rule, if we cut the stepsize by a factor of ten, we
improve the accuracy by roughly a factor of ten (plus higher order corrections), since it’s a
first-order algorithm. However, in the midpoint rule, if we cut the stepsize by a factor of ten,
we improve the accuracy by a factor of a hundred. This is why second-order algorithms are so
much better!
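This factor-of-a-hundred claim is easy to check numerically. A sketch (the test integrand $e^x$ on $[0,1]$, with analytic answer $e - 1$, is my own choice):

```python
import numpy as np

def midpoint_rule(f, a, b, n):
    """Midpoint-rule Riemann sum with n bins of width h = (b - a)/n."""
    h = (b - a) / n
    return h * np.sum(f(a + h * (np.arange(n) + 0.5)))  # evaluate f at each bin midpoint

analytic = np.e - 1.0                                    # integral of exp over [0, 1]
E_coarse = midpoint_rule(np.exp, 0.0, 1.0, 10) - analytic    # h = 0.1
E_fine   = midpoint_rule(np.exp, 0.0, 1.0, 100) - analytic   # h = 0.01
print(E_coarse / E_fine)  # close to 100: cutting h by 10 improves E by ~100
```

The ratio is not exactly 100 because of the higher-order corrections, but it approaches 100 as $h$ shrinks.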
5 The trapezoid rule
The trapezoid rule approximates the function as a sloped line whose $y$-intercept is $f(0)$ and whose slope is $(f(h) - f(0))/h$. As with the midpoint rule, we write $f(h)$ in terms of the Taylor expansion about $x = 0$. So the slope and intercept of the line are:
\[
m = \frac{f(0) + h f'(0) + \frac{1}{2} h^2 f''(0) - f(0)}{h} = f'(0) + \frac{1}{2} h f''(0) \tag{14}
\]
\[
b = f(0) \tag{15}
\]
giving us a slope-intercept equation of
\[
f_{\rm trap}(x) = f(0) + f'(0)\,x + \frac{1}{2} h x f''(0) \tag{16}
\]
which we can integrate to get
\[
\int_0^h f_{\rm trap}(x)\,dx = \int_0^h \left[ f(0) + f'(0)\,x + \frac{1}{2} h x f''(0) \right] dx \tag{17}
\]
\[
= h f(0) + \frac{1}{2} h^2 f'(0) + \frac{1}{4} h^3 f''(0) \tag{18}
\]
giving an error of
\[
E_{\rm trap} = -\frac{1}{12} h^3 f''(0) + O(h^4). \tag{19}
\]
(As before, the $-1/12$ comes from subtracting the 1/4 in the trapezoid approximation formula from the 1/6 in the “correct” answer in Eq. 4.) As with the midpoint rule, we have gotten the first two terms right; the error is of order $h^3$ for one bin, or order $h^2$ for the whole integral. Thus:
The trapezoid rule is also a second-order algorithm: the error in the integral is proportional to $h^2$.
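The per-bin coefficients ($1/6 - 1/8$ for the midpoint rule versus $1/6 - 1/4$ for the trapezoid rule) make a testable prediction: the trapezoid rule’s error should be about twice the midpoint rule’s, with the opposite sign. A sketch of that check (the test integrand $e^x$ on $[0,1]$ is my own choice):

```python
import numpy as np

def midpoint_rule(f, a, b, n):
    h = (b - a) / n
    return h * np.sum(f(a + h * (np.arange(n) + 0.5)))  # f at bin midpoints

def trapezoid_rule(f, a, b, n):
    h = (b - a) / n
    x = a + h * np.arange(n + 1)                        # the n + 1 bin edges
    return h * (np.sum(f(x)) - 0.5 * (f(a) + f(b)))     # endpoints weighted h/2

analytic = np.e - 1.0                                   # integral of exp over [0, 1]
E_mid  = midpoint_rule(np.exp, 0.0, 1.0, 100) - analytic
E_trap = trapezoid_rule(np.exp, 0.0, 1.0, 100) - analytic
print(E_trap / E_mid)  # roughly -2: twice the size, opposite sign
```
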
6 Summary
We have four formulas: one for the exact integral, and the three approximations:
\[
\int_0^h f_{\rm exact}(x)\,dx = h f(0) + \frac{1}{2} h^2 f'(0) + \frac{1}{6} h^3 f''(0) + O(h^4)
\]
\[
\int_0^h f_{\rm LHR}(x)\,dx = h f(0) \tag{20}
\]
\[
\int_0^h f_{\rm mid}(x)\,dx = h f(0) + \frac{1}{2} h^2 f'(0) + \frac{1}{8} h^3 f''(0) \tag{21}
\]
\[
\int_0^h f_{\rm trap}(x)\,dx = h f(0) + \frac{1}{2} h^2 f'(0) + \frac{1}{4} h^3 f''(0) \tag{22}
\]
The left-hand rule is a first-order algorithm; the other two are second-order algorithms.
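If you want to double-check the algebra behind these formulas, a computer algebra system can reproduce the per-bin errors directly. A sketch assuming SymPy is available; the symbols $f_0, f_1, f_2$ are my stand-ins for $f(0), f'(0), f''(0)$:

```python
import sympy as sp

x, h = sp.symbols('x h', positive=True)
f0, f1, f2 = sp.symbols('f0 f1 f2')  # stand-ins for f(0), f'(0), f''(0)

# Taylor expansion of f about the left edge of the bin, through second order
taylor = f0 + f1 * x + sp.Rational(1, 2) * f2 * x**2

exact = sp.integrate(taylor, (x, 0, h))                  # the "actual" integral over one bin
lhr   = h * f0                                           # rectangle at the left edge
mid   = h * taylor.subs(x, h / 2)                        # rectangle at the midpoint
trap  = h * (taylor.subs(x, 0) + taylor.subs(x, h)) / 2  # trapezoid through both edges

for name, approx in [("LHR", lhr), ("midpoint", mid), ("trapezoid", trap)]:
    print(name, sp.expand(exact - approx))
```

The printed differences should match the per-bin errors derived above term by term, including the sign of the trapezoid error.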
7 Simpson’s rule
You might note here that we have two second-order algorithms that have different incorrect values for the $O(h^3)$ term. By taking a weighted average of the two of these, we can make the $O(h^3)$ term come out right as well. This approach is called Simpson’s rule:
\[
f_{\rm Simpson}(x) = \frac{2}{3} f_{\rm mid}(x) + \frac{1}{3} f_{\rm trap}(x) \tag{23}
\]
Then the integral is
\[
\int_0^h f_{\rm Simpson}(x)\,dx = h f(0) + \frac{1}{2} h^2 f'(0) + \frac{1}{6} h^3 f''(0) \tag{24}
\]
which matches what we were “supposed to get”, except for errors of order $O(h^4)$ that we haven’t calculated. (To do this calculation, you’ll need to carry all the Taylor series involved out to one more order. Anyone who does this calculation and can explain it to me will get substantial extra credit.)
So what is the coefficient of the $O(h^4)$ error term in Simpson’s rule? When you do this – carry all the Taylor series out to one more order – you’ll see that by fortunate coincidence the $O(h^4)$ error is zero as well; the leading error in Simpson’s rule is actually of order $O(h^5)$! This means that:
Simpson’s rule is a fourth-order integration algorithm.
In our class, the midpoint or trapezoid rules will be enough. In principle, you could devise ever more
complex integration algorithms that are fifth, sixth, etc., order, but quite rapidly you wind up exhausting
the numerical precision of your computing device; computers are fast enough that Simpson’s rule is generally
fine for any integral you’d want to do with Riemann summation.
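A numerical sketch of Simpson’s rule built exactly as the weighted average in Eq. 23 (the test integrand $e^x$ on $[0,1]$ is my own choice):

```python
import numpy as np

def midpoint_rule(f, a, b, n):
    h = (b - a) / n
    return h * np.sum(f(a + h * (np.arange(n) + 0.5)))

def trapezoid_rule(f, a, b, n):
    h = (b - a) / n
    x = a + h * np.arange(n + 1)
    return h * (np.sum(f(x)) - 0.5 * (f(a) + f(b)))

def simpson_rule(f, a, b, n):
    # Weighted average from Eq. 23: 2/3 midpoint + 1/3 trapezoid per bin
    return (2.0 * midpoint_rule(f, a, b, n) + trapezoid_rule(f, a, b, n)) / 3.0

analytic = np.e - 1.0  # integral of exp over [0, 1]
E1 = simpson_rule(np.exp, 0.0, 1.0, 10) - analytic   # h = 0.1
E2 = simpson_rule(np.exp, 0.0, 1.0, 20) - analytic   # h = 0.05
print(E1 / E2)  # near 16 = 2**4 for a fourth-order algorithm
```

Halving $h$ shrinks the error by roughly $2^4 = 16$, the signature of a fourth-order algorithm.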