I'm trying to understand the following algorithm, found on page 26 of Volume 1 (3rd edition) of Donald Knuth's The Art of Computer Programming. It is exercise 25 on that page.
I am using <- as the assignment operator.
It says: suppose that we have a binary computer and a number x, 1 <= x < 2. Show that the following algorithm, which uses only shifting, addition, and subtraction (in a number of operations proportional to the number of places of accuracy desired), may be used to calculate an approximation to y = \log_b(x):
L1. [Initialize.] Set y <- 0, z <- x shifted right 1, k <- 1.
L2. [Test for end.] If x = 1, stop.
L3. [Compare.] If x - z < 1, set z <- z shifted right 1, k <- k + 1, and repeat this step.
L4. [Reduce values.] Set x <- x - z, z <- x shifted right k, y <- y + \log_b(2^k/(2^k - 1)), and go to L2.
I haven't managed to understand the algorithm, as I do not know how to apply it. I tried it in Python and in C, but the bitwise right-shift operator does not work on floats. Instead I used division by 2 in a C program, for base e, as follows:
#include <stdio.h>
#include <math.h>

int main(void)
{
    float x, y, z, save;
    int k = 1;

    save = x = 1.7;
    y = 0;
    z = x / 2;
    while (x != 1) {
        while (x - z < 1) {
            z = z / 2;
            k++;
        }
        x = x - z;
        z = z / 2;
        y += log(pow(2, k) / (pow(2, k) - 1));
    }
    printf("%f\n%f\n", y, log(save));
    return 0;
}
As you can see, the results are quite different: 0.784626 from the algorithm versus 0.530628 from the log function, so something is not right.
Please help me understand this algorithm. Thank you.