
If I write the following code:

#include <stdio.h>

int main(void)
{
    float a = 1.3;
    double b = 1.3;
    if (a > b)
        printf("hii");
    else
        printf("hello");
    return 0;
}

the output is "hii", but if I change both values to 1.7 the output is "hello". Can anyone explain why? If precision is what decides it, shouldn't the double be greater every time?
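
A quick way to see what is going on (a sketch, assuming an IEEE-754 machine) is to print both variables with more digits than their types can actually hold:

#include <stdio.h>

int main(void)
{
    float  a = 1.3;  /* 1.3 rounded to a 24-bit significand */
    double b = 1.3;  /* 1.3 rounded to a 53-bit significand */

    /* %.9g shows all the digits a float carries, %.17g a double;
       in a > b the float is promoted to double before comparing */
    printf("a = %.9g\nb = %.17g\n", a, b);
    printf("a > b is %s\n", a > b ? "true" : "false");
    return 0;
}

The two rounded values are almost never equal, and which one lands higher depends on the bit pattern of the particular literal, not on which type is "better".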

Yes, it's precision, and a double is simply more accurate. That does not equate to bigger.

For example: the double may hold 56.42735816 while the float holds 56.42736 -- the float is bigger.
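
For a concrete case (a sketch, assuming IEEE-754 single and double precision), the literal 1.1 happens to round up when stored as a float and barely moves as a double, so the less precise float ends up being the larger value:

#include <stdio.h>

int main(void)
{
    float  f = 1.1;  /* stored as 1.10000002384185791... */
    double d = 1.1;  /* stored as 1.10000000000000008... */

    /* f is promoted to double before the comparison */
    printf("%s\n", f > d ? "float is bigger" : "double is bigger or equal");
    return 0;
}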

Exactly as WaltP says. Comparing a float with a double rarely does what you expect: the float is promoted to double for the comparison, but the two values were already rounded differently when they were stored. If you must compare them, down-cast the double to float first, and even then check that it does what you think. Given the 64-bit systems we are mostly using these days, just use doubles whenever you need to compare computed floating-point results. The only exception is if you need more precision than a 64-bit double provides, in which case you can use long double if your compiler supports it.
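
A minimal sketch of both options (the tolerance value below is an arbitrary illustrative choice, not a universal constant):

#include <stdio.h>
#include <math.h>

int main(void)
{
    float  a = 1.3;
    double b = 1.3;

    /* option 1: down-cast the double so both sides carry float precision */
    if (a == (float)b)
        printf("equal at float precision\n");

    /* option 2: keep everything double and compare with a tolerance */
    double x = 1.3, y = 1.3;
    if (fabs(x - y) < 1e-9)
        printf("equal within tolerance\n");

    return 0;
}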
