Can anyone explain to me the precision, exponent, and significand bits that a double can handle? I'm not getting it. Wikipedia says it has 53 significand bits and an 11-bit exponent, and I'm a little confused. Can anyone explain it in their own way? Thanks a lot. I've used double a lot, but now I've run into a problem. Thanks.

Tell us what problem you're having.
Click Here for some format info.

Read this article for detailed information:

FWIW, my close friend Bruce Ravenel implemented the first IEEE 754 device in hardware when he architected the Intel 8087 math coprocessor back in the late 1970s. Here is another Wikipedia article about the Intel 8086/8087 development, citing Bruce for his work on the architecture of the 8087, and here is the article on the 8087:

@ddanbe what is the 1023 in this? I mean, when I print the value of some double by compiling and executing code, I only get 17 digits accurately and zeroes after that.

Secondly, can you explain how I can infer from these things the maximum value a double can hold?

I checked with 10/3 and it gives me 3.333333333333333300000... (16 threes after the decimal), and when I do 100/3 I get 33.3333333333333330000... (15 threes after the decimal). Thanks.


Remember everything is in base 2, binary. The 11 exponent bits give an exponent between -1022 and +1023, but that is still the exponent of a base-2 number, i.e. 2^-1022 to 2^1023 as a range, so in base 10 it gives a total decimal range of roughly 10^-308 to 10^308.

Similarly, the 52 fraction bits plus 1 implicit leading bit give 53 significant bits, also in base 2; converted to base 10 (53 * log10(2) is about 15.95), you get a total decimal precision of ~16 digits, plus or minus 1 digit, for a range of 15-17 significant decimal digits depending on how the rounding works out.

The various links given above explain the details, but I think it's the conversions back and forth between base 2 and base 10 that are tripping you up. It should all make sense once you keep those in mind.