I know how integer values are represented in memory, like +3, -3, 0, 1, 3321 and so on. Can you tell me how float and double are represented in memory? Like 3.220 or -9876.87? I mean, how are the bits laid out in memory? For example, for a negative signed number the 32nd bit (counting from the left) is 1. So can you please throw some light on how the parts before and after the decimal point are represented in the float and double data types? Thanks a lot. :-)

Can you explain a little bit? I had already read it before your post, and read it again after your post. Can you explain how the 7 digits of precision come about? That log part, can you explain it?

Secondly, what is the significance of the exponent bias? Please explain a little bit. I have read 3-4 articles on it. Thanks a lot in advance.

And how is (0.25) in decimal equal to (1.0) x 2^(-2)? Shouldn't it be (1.0) x 2^(-1)?

As per their method,

0.25 * 2 = 0.5   (0)
0.5  * 2 = 1.00  (1)

so 0.25 = (10); but in the IEEE format it is shifted, and it ends up looking like (1.0) x 2^(-1), since it was shifted one place. So why is it (-2)? Please explain this. Thanks in advance.

The representation is clearly described in the Wikipedia articles:

sign | exponent | fraction

The sign is always a single bit. The exponent is a fixed size, but different for each of the precisions. The fraction also has a fixed size (different for each precision).

Using double precision as an example, you get sign = 1 bit, exponent = 11 bits, and fraction = 52 bits. The equation to determine the value encoded in this representation is straightforward and described on the Wikipedia page. Essentially, you get something like:

value = (-1)^sign x (1 + fraction) x 2^(stored exponent - bias)

where the fraction bits are read as a binary value between 0 and 1 (that is, fraction / 2^52 for double precision).
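If you want to see those fields concretely, here is a minimal C sketch (assuming a platform where double is IEEE 754 binary64 and <stdint.h> is available; the variable names are just for illustration) that pulls the three fields out of a double and rebuilds the value from them:

#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <math.h>

int main(void)
{
    double x = -9876.87;
    uint64_t bits;
    memcpy(&bits, &x, sizeof bits);                /* reinterpret the 64 raw bits */

    uint64_t sign     = bits >> 63;                /* 1 bit                   */
    uint64_t exponent = (bits >> 52) & 0x7FF;      /* 11 bits, biased by 1023 */
    uint64_t fraction = bits & 0xFFFFFFFFFFFFFULL; /* 52 bits                 */

    /* rebuild: (-1)^sign * (1 + fraction/2^52) * 2^(exponent - 1023)         */
    /* (this only holds for normal numbers; see the notes below for the rest) */
    double rebuilt = (sign ? -1.0 : 1.0)
                   * ldexp(1.0 + (double)fraction / 4503599627370496.0, /* 2^52 */
                           (int)exponent - 1023);

    printf("sign=%llu exponent=%llu fraction=%llu\n",
           (unsigned long long)sign,
           (unsigned long long)exponent,
           (unsigned long long)fraction);
    printf("original=%.10f rebuilt=%.10f\n", x, rebuilt);
    return 0;
}

The rebuild step is only valid for normal numbers; zeros, subnormals, Inf, and NaN are handled by the reserved exponent values described below.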

Note on the exponent bias: to avoid having to encode both negative and positive exponents (needed for very small and very large magnitudes, respectively), the stored exponent is always a non-negative number, and a fixed bias is subtracted from it to recover the actual intended exponent. This lets you say that 0 is the minimum value the stored exponent field can hold and 2^N - 1 is the maximum (for an N-bit field), without needing a separate sign for the exponent.
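As a concrete illustration: the double-precision exponent field is 11 bits, so the bias is 1023 and the stored field runs from 0 to 2047. The relationship is

actual exponent = stored exponent - 1023

so an actual exponent of -2 is stored as 1021, and an actual exponent of +3 is stored as 1026; the stored field itself never needs a sign. (The stored values 0 and 2047 are reserved, as the next note explains.)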

Note on the min/max values of the exponent field: those two patterns (coupled with the fraction and sign bits) have special meanings. An all-ones exponent with a zero fraction means +/- Inf, and with a nonzero fraction means NaN; an all-zeros exponent with a zero fraction means +/- 0, and with a nonzero fraction means a subnormal number.
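To make those reserved patterns visible, here is a small C99 sketch (again assuming double is binary64; the show helper is just for the demo) that prints the raw fields of the special values:

#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <math.h>

static void show(const char *name, double x)
{
    uint64_t bits;
    memcpy(&bits, &x, sizeof bits);
    printf("%-5s sign=%llu exponent=%4llu fraction=0x%013llX\n",
           name,
           (unsigned long long)(bits >> 63),
           (unsigned long long)((bits >> 52) & 0x7FF),
           (unsigned long long)(bits & 0xFFFFFFFFFFFFFULL));
}

int main(void)
{
    show("+0.0", 0.0);        /* exponent 0 (minimum), fraction 0          */
    show("-0.0", -0.0);       /* same, but with the sign bit set           */
    show("+Inf", INFINITY);   /* exponent 2047 (maximum), fraction 0       */
    show("NaN",  NAN);        /* exponent 2047 (maximum), fraction nonzero */
    return 0;
}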

The Wikipedia article gives a good explanation of the conversion from base 10 to base 2 and the shifting required for normalization. I'm not sure which part of that example is unclear to you.

Walking through your example (0.25):

Starting with: 0.25
exponent = 0
fractional part to encode = 0.25


b[-1] = floor(0.25 * 2) = 0
new fractional part = (0.25 * 2) - b[-1] = 0.5
b[-2] = floor(0.5  * 2) = 1
new fractional part = (0.5  * 2) - b[-2] = 0.0 

fractional part = 0.0 terminates the calculation. 
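That multiply-by-2 procedure is easy to turn into code. Here is a small C sketch of the loop (the variable names are just for illustration); it prints the base-2 digits of a fractional part one at a time:

#include <stdio.h>
#include <math.h>

int main(void)
{
    double frac = 0.25;       /* fractional part to convert            */
    int max_digits = 8;       /* stop after a few digits for the demo  */

    printf("0.");
    for (int i = 0; i < max_digits && frac != 0.0; ++i) {
        frac *= 2.0;
        int digit = (int)floor(frac);  /* this is b[-(i+1)] in the steps above */
        printf("%d", digit);
        frac -= digit;                 /* keep only the new fractional part    */
    }
    printf("\n");                      /* prints 0.01 for an input of 0.25     */
    return 0;
}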

Now, the fractional part (in base 2) is 

010000....

Combine that with the exponent and get

0.010000....

Normalize by shifting the leading 1 up two places (a left shift by 2, which drops the exponent by 2) to get 1.0 x 2^-2. The shift is two places, not one, which is why the exponent is -2 rather than -1.
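You can check that result directly. Here is a minimal C sketch (assuming float is IEEE 754 binary32, where the exponent bias is 127) that prints the stored fields of 0.25f:

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
    float x = 0.25f;
    uint32_t bits;
    memcpy(&bits, &x, sizeof bits);

    unsigned sign     = bits >> 31;
    unsigned exponent = (bits >> 23) & 0xFF;   /* biased by 127 */
    unsigned fraction = bits & 0x7FFFFF;       /* 23 bits       */

    printf("sign=%u exponent=%u (unbiased %d) fraction=0x%06X\n",
           sign, exponent, (int)exponent - 127, fraction);
    /* prints: sign=0 exponent=125 (unbiased -2) fraction=0x000000 */
    return 0;
}

The stored exponent of 125 is 127 - 2, i.e., an unbiased exponent of -2, and the fraction bits are all zero, matching the 1.0 x 2^-2 above.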

L7Sqr gave a great synopsis of this. It is not a simple subject. If you want more information, see this Wikipedia article: https://en.wikipedia.org/wiki/IEEE_754-1985

FWIW, my best college buddy, Bruce Ravenel, was the architect of the Intel 8087 math co-processor back in the late 1970s, the first hardware implementation of the IEEE 754 standard. It is still part of our Pentium processors today. The 8087 did all computations in 80 bits (not 64), resulting in incredibly accurate results for the time and pretty much eliminating rounding errors.

OK, continued reading indicates that the 8087 design is not part of current Intel chipsets, though it must have been a major contributor to the math capabilities of today's Intel processors. I think they are using 128 bits now... :-)


When Bruce gets back from his current vacation in BC, Canada I'll ask him what's what with current Intel designs... :-)
