If I write the following code:

#include <stdio.h>

int main(void)
{
    float a = 1.3;
    double b = 1.3;
    if (a > b)
        printf("hii");
    else
        printf("hello");
    return 0;
}

the output comes as "hii", but if I change both values to 1.7 the output comes as "hello". Can anyone explain the reason? If precision is taken into account, then the double should be greater every time.

All 2 Replies

Yes, it's precision, and double is simply more accurate. That does not equate to bigger.

For example: double may be 56.42735816, float may be 56.42736 -- float is bigger.
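You can see this for 1.3 and 1.7 specifically by printing the stored values with extra digits. Here is a minimal sketch, assuming IEEE-754 float/double (which virtually every current compiler uses); the values in the comments are what such a system typically stores:

#include <stdio.h>

int main(void)
{
    /* In a > b the float is promoted to double, so what really gets
       compared is the float's rounding of the literal against the
       double's rounding of the same literal. */
    printf("float  1.3 = %.20f\n", 1.3f);  /* 1.29999995231628417969 */
    printf("double 1.3 = %.20f\n", 1.3);   /* 1.30000000000000004441 */
    printf("float  1.7 = %.20f\n", 1.7f);  /* 1.70000004768371582031 */
    printf("double 1.7 = %.20f\n", 1.7);   /* 1.69999999999999995559 */
    return 0;
}

For 1.3 the float rounds below the double, while for 1.7 it rounds above it, which is why the same comparison can go either way depending on the value.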

Exactly as WaltP says. You cannot meaningfully compare a float with a double unless you down-cast the double to float, and even then it may not do what you expect. Given the 64-bit systems we are mostly using these days, just use doubles if you need to compare computational results of floating-point values. The only exception is if you need more precision than a 64-bit double provides, in which case you can use long double if your compiler supports it.
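As a minimal sketch of that down-cast caveat (same IEEE-754 assumption as above): converting the double down to float can make the two sides compare equal rather than giving the ordering you might expect:

#include <stdio.h>

int main(void)
{
    float a = 1.3f;
    double b = 1.3;

    /* a > b would promote a to double and compare two different
       roundings of 1.3. Down-casting b instead rounds it to the
       very same float as a, so here the two compare equal. */
    if (a > (float)b)
        printf("a > b as floats\n");
    else if (a == (float)b)
        printf("a == b as floats\n");
    else
        printf("a < b as floats\n");
    return 0;
}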
