I'm trying to understand the following algorithm, found on page 26 of the first volume, 3rd edition, of Donald Knuth's The Art of Computer Programming. It is exercise 25 on that page.

I am using <- as the assignment operator.

It says: suppose that we have a binary computer and a number x, 1<=x<2. Show that the following algorithm, which uses only shifting, addition, and subtraction, in a number of operations proportional to the number of places of accuracy desired, may be used to calculate an approximation to y = \log _b \left( x \right):

L1. [Initialize.] Set y<- 0, z<- x shifted right 1, k<- 1.
L2. [Test for end.] If x=1, stop.
L3. [Compare.] If x-z<1, set z<-z shifted right 1, k<- k+1, and repeat this step.
L4. [Reduce values.] Set x<- x-z, z<- x shifted right k, y<- y + \log _b \left( 2^k/(2^k-1) \right) and go to L2.
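To see the steps actually executed, here is a direct transcription of the four steps in Python. It uses division by 2**k in place of "shifted right k" so it runs on floats; the function name and the max_k cutoff are my own additions (with exact floats x may never become exactly 1, so the loop has to be cut off once k exceeds the desired precision):

```python
import math

def knuth_log(x, b=math.e, max_k=40):
    """Approximate log_b(x) for 1 <= x < 2 using only halving,
    addition, subtraction, and precomputed log constants."""
    y = 0.0
    z = x / 2.0              # L1: y <- 0, z <- x shifted right 1, k <- 1
    k = 1
    while x != 1.0:          # L2: test for end
        while x - z < 1.0:   # L3: compare
            z /= 2.0         #     z <- z shifted right 1, k <- k + 1
            k += 1
            if k > max_k:    # cutoff: remaining error is below 2**-max_k
                return y
        x -= z                                    # L4: x <- x - z
        z = x / 2.0 ** k                          #     z <- x shifted right k
        y += math.log(2.0**k / (2.0**k - 1.0), b) #     y <- y + log_b(2^k/(2^k-1))
    return y

print(knuth_log(1.7))        # close to math.log(1.7) = 0.5306...
```

The invariant is that after step L4 we have x_new = x * (1 - 2^-k), so the true logarithm of the original x equals log_b(x_new) plus the constant log_b(2^k/(2^k-1)) that was just added to y; x shrinks toward 1 while y accumulates the answer.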

I didn't manage to understand the algorithm, as I do not know how to apply it.
I have tried it in Python and C, but the bitwise right-shift operator does not work on floats. Instead I tried dividing by 2 in a C program, using base e, as follows:

int main(void)
{
	float x, y, z, k = 1, save;

As you can see, the results are quite different: 0.784626 from the algorithm versus 0.530628 from the log function, so something is not right.

Please help me understand this algorithm. Thank you.

I interpret this as follows: an integer holds the fraction part of the number, and the whole part is always 1 (since 1<=x<2). Of course, when a shift to the right happens, that implicit highest bit has to be shifted in: either shift in a one, or keep the highest bit set and shift only the rest for the operation.

Or it can be that the highest bit is 1 and the rest of the number is the fraction; in the case of 32 bits you would divide by 1<<31 to get the value. In Python:

''' Input x with 1 <= x < 2 is stored as 1 + fraction/2**32 '''
float_num = 1.5
bin_num = int(float_num * (1 << 32))
print(float_num, bin(bin_num))
1.5 0b110000000000000000000000000000000
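Under that reading, the algorithm itself can run entirely on integers. Below is a sketch using the 2**32 scale from the snippet above, with Python's actual >> operator for the shifts. Keeping the running total y as a float and stopping once k exceeds the 32 fraction bits are my own simplifications, not something the exercise specifies:

```python
import math

SCALE = 32              # x is stored as int(value * 2**SCALE)
ONE = 1 << SCALE        # fixed-point representation of 1.0

def knuth_log_fixed(value, b=math.e):
    """Approximate log_b(value) for 1 <= value < 2 using integer shifts."""
    x = int(value * ONE)        # top bit is the integer part, rest is fraction
    z = x >> 1                  # L1
    k = 1
    y = 0.0
    while x != ONE:             # L2
        while x - z < ONE:      # L3
            z >>= 1
            k += 1
            if k > SCALE:       # remaining bits are below the precision
                return y
        x -= z                  # L4
        z = x >> k
        y += math.log(2.0**k / (2.0**k - 1.0), b)
    return y

print(knuth_log_fixed(1.7))     # close to math.log(1.7) = 0.5306...
```

Because x, z, and the comparisons are plain integers, every step really is just a shift, an addition, or a subtraction, as the exercise claims; the truncation in each shift costs about one unit in the last place per step, so the result agrees with math.log to roughly the precision of the 32 fraction bits.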