If we print a float using %d, a garbage value is printed... but if we scan a float value using %d into an int, the value gets truncated... What is the reason behind it?

A float passed to printf() is promoted to double, which is 8 bytes on most systems, while int is typically 4 bytes. If you say %f, printf() expects 8 bytes with a floating-point representation; if you say %d, it expects an int.

#include <stdio.h>

int main(void)
{
   int iInt = 1;

   /* the cast converts the value; the float is then promoted to double for %f */
   printf("int %d float %f\n", iInt, (float)iInt);
   return 0;
}

By the way, I suggest you read the input into a char buffer first and then convert it to float or int yourself.
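A minimal sketch of that approach, assuming fgets() for input and strtol()/strtod() for the conversion (the buffer size and error handling here are only illustrative):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
   char buf[64];   /* illustrative size */

   if (fgets(buf, sizeof buf, stdin) != NULL) {
      char *end;
      long n = strtol(buf, &end, 10);   /* try an integer conversion first */

      if (end != buf && (*end == '\n' || *end == '\0')) {
         printf("int: %ld\n", n);
      } else {
         double d = strtod(buf, &end);  /* fall back to floating point */

         if (end != buf)
            printf("float: %f\n", d);
         else
            printf("not a number\n");
      }
   }
   return 0;
}

This way you decide how to interpret the characters instead of letting a mismatched format specifier decide for you.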


%d, %f, etc. are conversion specifiers and are part of the format string. They tell printf() and scanf() at runtime to treat the corresponding argument as an integer for d and as a floating-point value for f.


What is the reason behind it?

If you lie to printf(), you get what you deserve. When you tell printf() to expect an int, it treats whatever you pass like an int. When you tell printf() to expect a float, it treats whatever you pass like a float. If whatever you pass doesn't have a compatible byte representation, don't be surprised when you get garbage.
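A minimal sketch of the mismatch (the printf() call shown in the comment is undefined behavior, so the "garbage" you see can differ between compilers and platforms):

#include <stdio.h>

int main(void)
{
   float f = 3.5f;

   printf("%f\n", f);  /* correct: %f matches the float promoted to double */

   /* Lying to printf() is undefined behavior: %d expects an int, but a
    * double's byte representation is passed instead, so the output (if any)
    * is garbage:
    *
    *     printf("%d\n", f);
    */
   return 0;
}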

Then why does scanf() return 1 in the following case if we give a floating-point value as input?

#include <stdio.h>

int main(void)
{
   int i;

   /* scanf() returns the number of successful conversions */
   printf("%d\n", scanf("%d", &i));
   return 0;
}

If I give a floating-point value as input, the output is 1, but if I give a character as input, the value returned is 0...

Then why does scanf() return 1 in the following case if we give a floating-point value as input?

Because scanf() reads until the first invalid character, and if there were valid characters before that point such that a conversion could be performed, the conversion will succeed. The part of a floating-point value up to the decimal point is a valid integer.
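A quick sketch to see this, capturing the return value of the same scanf() call (try entering 3.14, then run again and enter abc):

#include <stdio.h>

int main(void)
{
   int i = 0;
   int rc = scanf("%d", &i);

   /* input "3.14": rc == 1, i == 3, ".14" is left in the input stream
    * input "abc" : rc == 0, i is untouched */
   printf("scanf returned %d, i = %d\n", rc, i);
   return 0;
}

With 3.14, the conversion stops at the decimal point after consuming the 3, so one conversion succeeded; with abc there is no leading digit, so no conversion happens and scanf() returns 0.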
