I'm taking a numerical analysis class (using C++) and one of the assignments I have is to implement the secant method for a simple polynomial (x - x^(1/3) - 2).

Using this recursive method: x_{k+1} = x_k - [f(x_k)*(x_k - x_{k-1})] / [f(x_k) - f(x_{k-1})]

Simple enough, but part b of this question requires that a number Em (defined as the smallest machine number available) be added to the recursion, s.t.:

x_{k+1} = x_k - [f(x_k)*(x_k - x_{k-1})] / [f(x_k) - f(x_{k-1}) + Em]

Predictably, this has no effect on the reported results (I'm using the smallest double available in this case) -- but the question insinuates that there is some purpose to doing this, and I really don't know what it is.

The only thing I could think of is the possibility of some function provoking an oscillation near the root value because of accumulated floating-point error. I thought maybe adding this Em to the denominator might alleviate this problem.

What do you guys think?

x - x^(1/3) - 2 is not a polynomial.

Adding Em would ward off divide-by-zero errors. It's much more likely for a function to take the same floating-point value at two different points than for the difference f(x_k) - f(x_{k-1}) to come out to exactly ±Em. For a continuous function, the only place the difference of two floating-point function values could land on ±Em is about the origin, if the function passes through the origin -- unless the function has a zero where the first non-zero derivative is absurdly small. (In those situations you'd usually rescale your units anyway.) You could also get ±Em for very large values of x, if lim (x->infinity) f(x) = 0.

For example, if you're using doubles to represent your numbers, and you have two numbers 2 <= x1 <= x2 <= 3, then evaluating (x2-x1) will produce one of the discrete values 0, 2^-51, 2*2^-51, 3*2^-51, 4*2^-51, ... (2^-51 is the spacing of consecutive doubles in [2, 4)). If the difference is 0, then adding Em makes the denominator nonzero; and if it's nonzero, then adding Em doesn't change the denominator at all, since Em is hundreds of orders of magnitude smaller than 2^-51.

Thus adding Em in your case (where your root is not zero, and where you never get two iterates x_i, x_{i+1} such that f(x_i) = f(x_{i+1})) has exactly no impact on convergence, and no impact on any of your x_n values.

Perfect. Thank you for a clear and well reasoned explanation.


The number you're after is machine epsilon. Try running this chunk of code and see if it helps; you will need to modify it for your program and place it somewhere of your choice to make it work. I need to do the secant program right now myself, so an algorithm for that would be helpful in return -- let me know if you have something. I hope this helps.
Jkettel81

//this will find epsilon, the smallest
//power of 1/2 that still changes 1.0 when added to it

#include <iostream>
using namespace std;

int main()
{
    //declare variables
    double epsilon;   //epsilon
    double epsilonp1; //epsilon+1
    int cnt;          //shows number of passes

    //assign values
    epsilon = 1.0;
    epsilonp1 = epsilon + 1.0;
    cnt = 0;
    //begin loop: halve epsilon until 1.0 + epsilon == 1.0
    while (epsilonp1 > 1)
    {
        cnt = cnt + 1;
        epsilon = 0.5 * epsilon;
        epsilonp1 = epsilon + 1.0;
        cout << cnt << " " << epsilon << endl;
    }
    //these 2 show the difference between math and PCs
    cout << 1 + epsilon + epsilon + epsilon + epsilon + epsilon + epsilon << endl;
    cout << 1 + (epsilon + epsilon + epsilon + epsilon + epsilon + epsilon) << endl;
    cout << "the final epsilon is " << epsilon << endl;
    return 0;
}

In C++ you should have access to the constants DBL_MIN and DBL_EPSILON, defined in &lt;cfloat&gt;.

DBL_MIN is the smallest normalized number representable by the floating-point type you are using. DBL_EPSILON is the difference between 1.0 and the next representable double above 1.0, i.e. the smallest number such that (1.0 + DBL_EPSILON) > 1.0.

Relating to root-finding, DBL_EPSILON sets the limit of precision to which you can seek a zero. If you tried to find a zero to a precision of 1.0e-10 on a machine whose DBL_EPSILON was only 1.0e-5, you'd be wasting your time; a precision of about 1.0e-5 (relative to the root) is the best you could do.

It's even harder when you realize that in some standard libraries even functions like sqrt aren't monotonic at the last bit.