Hi. I'm having trouble getting the following program to convert a string to a float. What I need it to do is this:

12.345e3 = 12345

(the 'e' means "times 10 to the power of")

I got the basics working. However, it doesn't work for negative numbers or exponents. While I completely understand why the exponents fail, I can't say the same for negative numbers. Also, I'm pretty sure the code can be optimised; I just don't know how to do it.

#include <stdio.h>
#include <math.h>
#define SIZE 1024

int count(char S[SIZE], char c)
{
    int i = 0, count = 0;
    for (; S[i] != 0; ++i)
        if (S[i] == c) ++count;
    return count;
}

float atof(char *s)
{
    float num = 0.0;
    float kon = 0.0;

    while (*s) {
        if (*s >= '0' && *s <= '9') {
            num = 10.0 * num + (float)(*s - '0');
            kon *= 10.0;
        }
        else if (*s == '.') kon = 1.0;
        else if (*s == 'e') {
            /* exponent handling missing -- this is where it breaks */
        }
        ++s;
    }
    num = num / (kon == 0.0 ? 1.0 : kon);
    return num;
}

int main(void)
{
    float f;
    char a[SIZE];
    scanf(" %s", a);
    f = atof(a);
    printf("atof(\"%s\")=%f\n", a, f);
    return 0;
}

I see no provision for negative numbers (or any sign at all). Usually it follows logic similar to this (greatly simplified):

if (*s == '-' || *s == '+') {
    sign = *s++;
}

/* Convert the number */

if (sign == '-')
    value = -value;
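To make the simplified snippet above concrete, here is a minimal, self-contained sketch of the "remember the sign, negate at the end" pattern applied to plain integers. The function name `parse_signed_int` is mine, not from the thread:

```c
#include <stdio.h>

/* Hypothetical helper: parse an optional sign followed by digits.
   Illustrates remembering the sign up front and applying it once at the end. */
long parse_signed_int(const char *s)
{
    char sign = '+';
    long value = 0;

    if (*s == '-' || *s == '+')
        sign = *s++;          /* consume the sign character, remember it */

    while (*s >= '0' && *s <= '9')
        value = value * 10 + (*s++ - '0');

    if (sign == '-')
        value = -value;       /* apply the sign once, after conversion */

    return value;
}

int main(void)
{
    printf("%ld\n", parse_signed_int("-123"));  /* -123 */
    printf("%ld\n", parse_signed_int("+45"));   /* 45 */
    return 0;
}
```

The same shape carries over to a float converter: the only change is what "Convert the number" does in the middle.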

I thought that the program's default assumption would be that the number is positive (since I didn't do anything to make it "think" otherwise), and in the case it's negative I tried to count '-' signs in the string (see below). The program converts the number properly, except for the sign.

Also, there is the problem of a potential '-' after the 'e', which shouldn't change the sign of the float value, but the value itself (it makes the exponent negative).
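That distinction can be made concrete: the sign after the 'e' feeds into the power of ten, not into the mantissa's sign. A small sketch under that assumption (the function name and parameter split are mine):

```c
#include <math.h>
#include <stdio.h>

/* Hypothetical illustration: the mantissa sign and the exponent sign
   are applied in different places.  For "1.5e-2" the pieces would be
   mantissa 1.5, exp_sign -1, exponent 2. */
double apply_exponent(double mantissa, int exp_sign, int exponent)
{
    return mantissa * pow(10.0, exp_sign * exponent);
}

int main(void)
{
    printf("%g\n", apply_exponent(1.5, -1, 2));   /* 0.015 */
    printf("%g\n", apply_exponent(-1.5, 1, 2));   /* -150: exponent leaves the value's sign alone */
    return 0;
}
```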


i tried to count '-' signs in the string (see below)

That won't work, for the very reason you stated. You need to be strict about the format of the string: non-digit characters are allowed, but only in very specific locations and circumstances.

I'd recommend ditching what you have and starting over with nothing but a floating-point lexer. Take a string and break it down into the component parts while also validating the format. Once you can do that with 100% accuracy on the grammar you're using, the pieces can be parsed to extract a real value.
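Such a lexer might look something like the sketch below, built on the grammar [sign] digits [. digits] [e|E [sign] digits]. It is only an illustration of the idea, and the names (`FloatTokens`, `lex_float`) are mine, not anything from the thread:

```c
#include <ctype.h>
#include <stdio.h>

/* Component parts of a float literal, filled in by the lexer. */
typedef struct {
    int  mant_sign;    /* +1 or -1 */
    long int_part;     /* digits before the '.' */
    long frac_part;    /* digits after the '.' */
    int  frac_len;     /* number of fraction digits */
    int  exp_sign;     /* +1 or -1 */
    long exponent;     /* digits after the 'e' */
} FloatTokens;

/* Returns 1 on a valid literal, 0 on any grammar violation. */
int lex_float(const char *s, FloatTokens *t)
{
    t->mant_sign = 1; t->int_part = 0;
    t->frac_part = 0; t->frac_len = 0;
    t->exp_sign = 1;  t->exponent = 0;

    if (*s == '-' || *s == '+') {          /* optional mantissa sign */
        if (*s == '-') t->mant_sign = -1;
        ++s;
    }
    if (!isdigit((unsigned char)*s)) return 0;   /* need at least one digit */
    while (isdigit((unsigned char)*s))
        t->int_part = t->int_part * 10 + (*s++ - '0');

    if (*s == '.') {                       /* optional fraction */
        ++s;
        while (isdigit((unsigned char)*s)) {
            t->frac_part = t->frac_part * 10 + (*s++ - '0');
            ++t->frac_len;
        }
    }
    if (*s == 'e' || *s == 'E') {          /* optional exponent */
        ++s;
        if (*s == '-' || *s == '+') {
            if (*s == '-') t->exp_sign = -1;
            ++s;
        }
        if (!isdigit((unsigned char)*s)) return 0;  /* "1e" alone is invalid */
        while (isdigit((unsigned char)*s))
            t->exponent = t->exponent * 10 + (*s++ - '0');
    }
    return *s == '\0';   /* trailing garbage invalidates the whole string */
}

int main(void)
{
    FloatTokens t;
    printf("%d\n", lex_float("12.345e3", &t));  /* 1: valid */
    printf("%d\n", lex_float("1.2.3", &t));     /* 0: second '.' rejected */
    return 0;
}
```

Once `lex_float` accepts a string, computing the value from the token fields is straightforward arithmetic, and invalid input never reaches that stage.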


This is a simple implementation of a switch-based scientific notation parser I came up with. I put lots of comments in there so you can build upon it and do more with it. I have tested it and it works fine with all your example inputs.

// Note: needs <string.h> for strlen and <math.h> for pow.
double ScientificToDouble(char *in_String) {

	// Loop Variables
	int        Counter       = 0;
	int        Length        = strlen(in_String) + 1;

	// Flags and signs
	int        NegativeFlag  = 0;
	int        DecimalFlag   = 0;
	int        ExponentSign  = 0;  // -1 = Negative, 0 = None, 1 = Positive

	// Numerical Data
	int        Exponent      = 0;
	int        FinalDivision = 1;
	long       Digits        = 0;

	// Loop per each character. Ignore anything weird.
	for (; Counter < Length; Counter++) {

		// Depending on the current character
		switch (in_String[Counter]) {

			// On any digit
			case '0': case '5':
			case '1': case '6':
			case '2': case '7':
			case '3': case '8':
			case '4': case '9':

				// If we haven't reached an exponent yet ("e")
				if (ExponentSign == 0) {

					// Adjust the final division if a decimal was encountered
					if (DecimalFlag) FinalDivision *= 10;

					// Add a digit to our main number
					Digits = (Digits * 10) + (in_String[Counter] - '0');

				// If we passed an "e" at some point
				} else {

					// Add a digit to our exponent
					Exponent = (Exponent * 10) + (in_String[Counter] - '0');
				}
				break;

			// On a negative sign
			case '-':

				// If we passed an 'e'
				if (ExponentSign > 0) {

					// The exponent sign will be negative
					ExponentSign = -1;

				// Otherwise we are still dealing with the main number
				} else {

					// Set the negative flag. We will negate the main number later.
					NegativeFlag = 1;
				}
				break;

			// If we encounter some kind of "e"
			case 'e': case 'E':

				// Set the exponent flag
				ExponentSign = 1;
				break;

			// If we encounter a period
			case '.':

				// Set the decimal flag. We will start tracking decimal depth.
				DecimalFlag = 1;
				break;

			// We gladly accept all sorts of additional garbage.
		}
	}

	// If the negative flag is set, negate the main number
	if (NegativeFlag)
		Digits = 0 - Digits;

	// If the exponent is supposed to be negative, negate it now
	if (ExponentSign < 0)
		Exponent = 0 - Exponent;

	// Return the calculated result of our observations
	return ((double)Digits / (double)FinalDivision) * pow(10.0, (double)Exponent);
}
Looks to me like this parser accepts a sign inside the number, multiple decimal points and multiple 'e's, doesn't deal with repeated negative signs correctly, and accepts all sorts of garbage, just like its comment says. Otherwise it would smell like a ready-made answer to homework, which you should learn to resist (I know giving leading hints can be harder than writing the code, but the OP learns next to nothing if you just do it for him). I would at least say to the OP: "Here is the principle; you must fix it to be strict, by coming up with more test cases and fixing the code."
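One way to build the kind of strict test cases suggested above is to lean on the standard library: `strtod` reports, via its end pointer, exactly where parsing stopped, so a thin wrapper can serve as a reference oracle when checking a hand-written parser against tricky inputs. A sketch (the wrapper name is mine):

```c
#include <stdio.h>
#include <stdlib.h>

/* A strict wrapper over strtod: accept the string only if the whole
   thing was consumed as one valid float literal. */
int strict_parse(const char *s, double *out)
{
    char *end;
    *out = strtod(s, &end);
    return end != s && *end == '\0';
}

int main(void)
{
    /* Inputs a lenient parser tends to get wrong. */
    const char *cases[] = { "12.345e3", "-1.5e-2", "1-2", "1.2.3", "1e" };
    for (size_t i = 0; i < sizeof cases / sizeof *cases; ++i) {
        double v;
        printf("%-10s %s\n", cases[i],
               strict_parse(cases[i], &v) ? "valid" : "rejected");
    }
    return 0;
}
```

Running a hand-written parser over the same list and comparing verdicts quickly exposes where it is too permissive.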

