#include <stdio.h>

int main (void)
{
	float a;
	a = 7/3;
	printf ("%f\n", a);
}

Result: 2.000000

Why is the gcc compiler printing zeros after the decimal point?

7/3 is evaluated in an integer context, which means the fractional part is truncated away before the result is ever assigned to the float variable. Make either one of the operands floating-point and you'll get what you want:

a = 7.0f / 3;
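
If it helps, here's a minimal corrected version of the original program; the cast form is an equivalent alternative:

#include <stdio.h>

int main (void)
{
	float a;

	a = 7.0f / 3;        // floating-point operand forces float division
	printf ("%f\n", a);  // prints 2.333333

	a = (float) 7 / 3;   // an explicit cast does the same thing
	printf ("%f\n", a);  // prints 2.333333

	return 0;
}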

If you want to print a specific number of decimal places, put a precision between the % and the f:

printf ("%.1f",a); //will print out a with 1 decimal digit

Narue is right..

On evaluating
a = 7/3
a gets the value 2, which on conversion to float becomes 2.000000.
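
You can make the two steps explicit; this sketch is effectively what the compiler does behind the scenes:

int q = 7 / 3;      // integer division: q == 2, the .333... is discarded
float a = q;        // int-to-float conversion: a == 2.0f
printf ("%f\n", a); // prints 2.000000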

You can try this: printf ("%f\n", 7/3);

This will return 2.3333..

Please explain how that's supposed to work. The expression is still evaluated in an integer context, which means the result is an integer type. You then pass this integer to printf() with the %f specifier, which expects a floating-point type.

So aside from the original bug, you've introduced another: passing the wrong type to printf(). The output is more likely to be 0.000000, but since this is undefined behavior in the first place, the result is completely unpredictable. If anything, 2.333333 is the least likely output, because the fractional part was already discarded when 7/3 was evaluated in integer context.
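
To make the difference concrete, compare these calls (compile with -Wall and gcc will flag the first one via -Wformat):

printf ("%f\n", 7 / 3);           // undefined: %f expects a double, gets an int
printf ("%f\n", (double)(7 / 3)); // well-defined, but still prints 2.000000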

Oh yeah..

it should be printf ("%f\n", 7.0 / 3);
