#include <cmath>

long double f(long double sample);   // forward declarations so NR compiles
long double fp(long double sample);

// Recursive Newton-Raphson: recurse until |f(x)| is below a fixed tolerance.
long double NR(long double sample)
{
	// fabsl, not abs: with <cmath>, unqualified abs picks the int overload.
	return fabsl(f(sample)) <= 1e-10L ? sample : NR(sample - f(sample) / fp(sample));
}

long double f(long double sample)
{
	return 2.5L * expl(-sample) - 3.0L * sinl(sample);
}

// Derivative of f
long double fp(long double sample)
{
	return -2.5L * expl(-sample) - 3.0L * cosl(sample);
}

I'm trying to write code for a Newton-Raphson algorithm. I have it in recursive form, but since it only finds the root by estimation, f(x) never exactly equals zero, so instead I want it to stop calculating once a certain precision is reached.

Can anyone help me find a way to do this? I'd like something general enough to handle any f(x), including provision for cases where f(x) approaches zero asymptotically without ever crossing it. At the moment my quick fix is to check whether f(x) is sufficiently small to be considered a root.

In the future I'll add code to handle arbitrary polynomials, but at the moment the functions are hardcoded. That's another story, though.

P.S.
If I let it run and force it to calculate until f(x) == 0 exactly, a stack overflow error occurs.

See http://www.nrbook.com/a/bookcpdf.php, section 9.4 (you need some stupid plug-in in addition to Acrobat Reader, but if you're in this line of inquiry this book is gospel). It doesn't give a clean answer to this, but it does show how to calculate how much the error shrinks at each step. It's just a matter of how precise your answer needs to be. The tolerance of doubles and long doubles on a given compiler/machine is usually documented. Sometimes the best solution is to test your implementation on a polynomial whose roots you already know and see how quickly it converges. Hope that helps a bit.

Thank you jonsca, I eventually got the solution with some research. I didn't, rather, couldn't, use your resource though. I couldn't open it. :(

Thank you. :D
