I am investigating the equations which say that the error in the trapezium rule is proportional to (b-a)^3, where a and b are the lower and upper limits respectively, and that the error in Simpson's rule is proportional to (b-a)^5.

The error terms (taken from Wikipedia) are E_trap = -((b-a)^3 / (12 n^2)) f''(ξ) for the composite trapezium rule and E_simp = -((b-a)^5 / (180 n^4)) f''''(ξ) for the composite Simpson's rule, where ξ lies somewhere in (a, b).

I am considering the integral of cos(x) between a=0 and b.

I compute the corresponding error for different values of (b-a), keeping the lower limit a constant and the number of strips n constant. I then use these values to plot log(error) against log(b-a), which should give a straight line of gradient 3 for the trapezium rule and gradient 5 for Simpson's rule. (See image below.)
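(To spell out why: if error ∝ (b-a)^p, then log(error) = p·log(b-a) + constant, so the slope of the log-log plot should equal the exponent p, i.e. 3 for the trapezium rule and 5 for Simpson's rule.)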


As you can see, the graphs behave as expected (the gradients are indeed 3 and 5), but only up to b ≈ 2; after that they become a mess. Does this mean there is something wrong with my code, or is it related to the precision limit of floating-point arithmetic? Either way, I don't really understand why the graphs only behave as expected over such a small range.

Any help would be greatly appreciated. Thanks.

// Finding the relationship between error and range of limits
#include <fstream>
#include <cmath>
using namespace std;

double integrand(double);
double trapezium(int, double, double);
double simpson(int, double, double);

int main()
{
	ofstream outfile("cosba.xls");
	double a=0, b, Itrap, Isimp, errortrap, errorsimp;
	const int n=10;
	// Sweep the upper limit b with a and n fixed; exact integral of cos(x) over [a,b] is sin(b)-sin(a).
	for(b=a+0.1; b<=10.0; b+=0.1)
	{
		Itrap=trapezium(n, a, b);
		Isimp=simpson(n, a, b);
		errortrap=fabs(Itrap-(sin(b)-sin(a)));
		errorsimp=fabs(Isimp-(sin(b)-sin(a)));
		outfile << log(b-a) << '\t' << log(errortrap) << '\t' << log(errorsimp) << '\n';
	}
	return 0;
}

double integrand(double x) { return cos(x); }

// Composite trapezium rule with n strips of width delta.
double trapezium(int n, double a, double b)
{
	double x=a, sumpar=0.0, totalsumpar, delta=(b-a)/n;
	for(int i=1;i<=n-1;i++) { x+=delta; sumpar+=integrand(x); }        // sum of interior ordinates
	totalsumpar=delta*(0.5*(integrand(a)+integrand(b))+sumpar);
	return totalsumpar;
}

// Composite Simpson's rule with n (even) strips of width delta.
double simpson(int n, double a, double b)
{
	double sumpar1=0.0, sumpar2=0.0, totalsumpar, delta=(b-a)/n;
	for(int i=1;i<=n-1;i+=2) sumpar1+=integrand(a+i*delta);            // odd ordinates (weight 4)
	for(int i=2;i<=n-2;i+=2) sumpar2+=integrand(a+i*delta);            // even interior ordinates (weight 2)
	totalsumpar=(delta/3.0)*(integrand(a)+integrand(b)+4.0*sumpar1+2.0*sumpar2);
	return totalsumpar;
}

You have to understand that these error formulas are upper bounds. I would think the messy curves you see mostly have to do with the particular function being integrated. For example, when the range includes a zero-crossing, the way the sub-intervals of integration fall around that crossing can have a big effect on the actual error. With cos(x) and a = 0, the trapezium error term contains f''(ξ) = -cos(ξ) for some ξ in (0, b); once b passes π/2 that factor can change sign, so the actual error can pass through zero and log(error) dips sharply. That would explain the "periodic" error curves you get.

I would suggest that you "sweep" the range of integration and take the maximum error. In other words, let c = b - a; then for each value of c (the interval size), pick M values of a within one period of the curve (maybe 0 to pi, or even at random) and perform the integration for each. Then record the maximum error obtained over all the a values tried for that interval size c. This should get rid of the mess on the graph by eliminating the "chance" factor of where the sub-intervals fall. And remember that the important quantity is not the actual curve you plot, but the line that hugs the upper bound of your errors. A rough sketch of what I mean is below.
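Something like this (just a sketch, reusing your integrand/trapezium/simpson functions from the code above; M, the sweep range, the seed and the output file name are choices I made for illustration):

// Sketch: for each interval size c = b - a, try M random lower limits a and keep the worst error.
#include <fstream>
#include <cmath>
#include <cstdlib>
#include <algorithm>
using namespace std;

double integrand(double);                       // cos(x), defined as in the code above
double trapezium(int, double, double);
double simpson(int, double, double);

int main()
{
	ofstream outfile("maxerror.xls");
	const int n = 10;                           // strips, kept constant as in your runs
	const int M = 50;                           // lower limits tried per interval size (arbitrary)
	const double pi = 3.14159265358979;
	srand(12345);
	for (double c = 0.1; c <= 10.0; c += 0.1)   // c is the interval size b - a
	{
		double maxtrap = 0.0, maxsimp = 0.0;
		for (int k = 0; k < M; k++)
		{
			double a = pi * rand() / RAND_MAX;  // random lower limit in [0, pi]
			double b = a + c;
			double exact = sin(b) - sin(a);     // exact integral of cos(x)
			maxtrap = max(maxtrap, fabs(trapezium(n, a, b) - exact));
			maxsimp = max(maxsimp, fabs(simpson(n, a, b) - exact));
		}
		outfile << log(c) << '\t' << log(maxtrap) << '\t' << log(maxsimp) << '\n';
	}
	return 0;
}

If you then plot log(maxtrap) and log(maxsimp) against log(c), the upper envelope should come out much closer to straight lines of slope 3 and 5.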

As for round-off errors and floating-point precision, they might contribute some small deviation from the expected results, but not that much: a double carries roughly 16 significant digits, so round-off only matters once the true error gets down near that level.
