I am investigating the standard results that the error in the trapezium rule is proportional to (b-a)^3, where a and b are the lower and upper limits respectively, and that the error in Simpson's rule is proportional to (b-a)^5.

[img]http://img809.imageshack.us/i/eqns.png/[/img]
(The first equation is for the trapezium rule, the second for Simpson's rule; taken from Wikipedia.)
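Written out in the standard composite form, these are

E_trap = -(b-a)^3/(12*n^2) * f''(ξ)  and  E_simpson = -(b-a)^5/(180*n^4) * f''''(ξ),

for some ξ in [a,b], where n is the number of strips.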

I am considering the integral of cos(x) between a=0 and b.

I compute the corresponding error for different values of (b-a), keeping the lower limit a and the number of strips n constant. I then use these values to plot log(error) against log(b-a): since E ∝ (b-a)^k implies log(E) = k·log(b-a) + constant, this should produce a straight line of gradient 3 for the trapezium rule and gradient 5 for Simpson's rule. (See image below.)

[img]http://img695.imageshack.us/i/graphqm.png/[/img]

As you can see, the graphs behave as expected (the gradients are indeed 3 and 5), but only up to b ≈ 2, after which they become a mess. Does this mean there is something wrong with my code, or is it related to the limits of floating-point precision? Either way, I don't really understand why the graphs only behave as expected over such a small range.

Any help would be greatly appreciated. Thanks.

// Finding the relationship between error and range of limits
#include<cmath>
#include<fstream>
using namespace std;
double integrand(double);
double trapezium(int, double, double);
double simpson(int, double, double);
int main()
{
	ofstream outfile("cosba.xls");
	double a=0, b, Itrap, Isimp, errortrap, errorsimp;
	const int n=10;                               // number of strips (kept fixed)
	for(b=0.1;b<100.0;b+=0.1)
	{
		Itrap=trapezium(n,a,b);
		Isimp=simpson(n,a,b);
		errortrap=Itrap-(sin(b)-sin(a));          // exact integral of cos is sin(b)-sin(a)
		errorsimp=Isimp-(sin(b)-sin(a));
		outfile<<log(b-a)<<"\t"<<log(fabs(errortrap))<<"\t"<<log(fabs(errorsimp))<<endl;
	}
	}
	return 0;
}

double integrand(double x)
{
	return cos(x);
}

// Composite trapezium rule with n strips on [a,b]
double trapezium(int n, double a, double b)
{
	double x=a, sumpar=0.0, totalsumpar, delta=(b-a)/n;
	for(int i=1;i<=n-1;i++)                       // interior points a+delta ... b-delta
	{
		x+=delta;
		sumpar+=integrand(x);
	}
	totalsumpar=(delta*0.5)*(2.0*sumpar+integrand(a)+integrand(b));
	return totalsumpar;
}

// Composite Simpson's rule with n strips on [a,b]; n must be even
double simpson(int n, double a, double b)
{
	double x, sumpar1=0.0, sumpar2=0.0, totalsumpar, delta=(b-a)/n;
	
	x=a-delta;
	for(int i=1;i<=n-1;i+=2)                      // odd interior points: a+delta, a+3*delta, ...
	{
		x+=2.0*delta;
		sumpar1+=integrand(x);
	}
	
	x=a;
	for(int i=2;i<=n-2;i+=2)                      // even interior points: a+2*delta, a+4*delta, ...
	{
		x+=2.0*delta;
		sumpar2+=integrand(x);
	}
	
	totalsumpar=(delta/3.0)*((4.0*sumpar1)+(2.0*sumpar2)+integrand(a)+integrand(b));
	return totalsumpar;
}

You have to understand that these error expressions are essentially upper bounds: the constant in front of (b-a)^3 or (b-a)^5 contains a derivative of the integrand evaluated at some unknown point in the interval. I would think the messy curves you see mostly have to do with the particular function being integrated. For example, when the range includes a zero-crossing, the way the sub-divided intervals of integration fall around that zero-crossing can have a big effect on the actual error. That would explain the "periodic" error curves you get.
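To put that in rough symbols for your integrand: with f(x) = cos(x) you have f''(ξ) = -cos(ξ) and f''''(ξ) = cos(ξ), so the signed errors behave roughly like

errortrap ≈ (b-a)^3/(12*n^2) * cos(ξ)  and  errorsimp ≈ -(b-a)^5/(180*n^4) * cos(ξ'),

for some unknown points ξ, ξ' in [a,b]. That cos factor can take any value between -1 and +1, so the signed error itself oscillates and passes through zero as b grows beyond the first zero-crossing of cos, which is roughly where your plot starts to misbehave. Every time the signed error crosses zero, log|error| plunges towards minus infinity, which is exactly the downward spikes in the mess on your graph.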

I would suggest that you "sweep" the lower limit of integration and take the maximum error. In other words, let c = b - a; then for each interval size c, pick M values of a within one period of the curve (say 0 to pi, perhaps even chosen at random), perform the integration for each, and keep the maximum error obtained over all the a values tried. This should clean up the graph by eliminating the "chance" factor of where the sub-intervals happen to fall. And remember that the important quantity is not the actual curves you plot, but the line that hugs the upper bound of your error; see the sketch below.
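Roughly, a sketch of what I mean (it just re-uses your trapezium() and integrand() routines; the M = 50 starting points spread over [0, pi) and the file name are arbitrary choices):

// Sketch: for each interval size c = b - a, sweep the lower limit a over
// one period of cos and keep the worst (maximum) trapezium error.
#include<cmath>
#include<fstream>
using namespace std;

double integrand(double x) { return cos(x); }            // same integrand as above

double trapezium(int n, double a, double b)              // same rule as above
{
	double x=a, sumpar=0.0, delta=(b-a)/n;
	for(int i=1;i<=n-1;i++) { x+=delta; sumpar+=integrand(x); }
	return (delta*0.5)*(2.0*sumpar+integrand(a)+integrand(b));
}

int main()
{
	ofstream outfile("maxerr.xls");
	const int n=10;                                      // strips, fixed as before
	const int M=50;                                      // lower limits tried per interval size
	const double pi=3.14159265358979323846;
	for(double c=0.1;c<100.0;c+=0.1)                     // c = b - a
	{
		double maxerr=0.0;
		for(int j=0;j<M;j++)
		{
			double a=pi*j/M;                             // sweep a over [0, pi)
			double err=trapezium(n,a,a+c)-(sin(a+c)-sin(a));
			if(fabs(err)>maxerr) maxerr=fabs(err);
		}
		outfile<<log(c)<<"\t"<<log(maxerr)<<endl;
	}
	return 0;
}

Do the same with simpson(); the upper envelope of the error should then follow the expected power laws much more cleanly, at least while the strips are still narrow enough for the asymptotic error formulas to apply.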

As for round-off error and floating-point precision, they might contribute a small deviation from the expected results, but not that much.
