Well, I ran a piece of code under a debugger. The value I entered was 0.0000124568, but the value the program actually stored, as shown by the debugger, was 1.2456800000000001e-005. Note the spurious trailing '1'. Since this comes from how decimal fractions are represented in binary floating point, the only way I see to get around it is to read the value in as a char array and search for the decimal point in the array. Even then, you would have to know the maximum length of a value that can be entered.
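For illustration, here is a minimal sketch of both points in C++: printing the double at full precision exposes the extra digit, and reading the input as a char array lets you count the decimal places exactly as typed. The buffer size of 64 is just an assumed maximum input length, which is the limitation mentioned above.

#include <cstdio>
#include <cstring>

int main()
{
    // The nearest IEEE-754 double to 0.0000124568 is not exact, so
    // printing with full precision reveals the extra digits the
    // debugger showed.
    double d = 0.0000124568;
    std::printf("%.17g\n", d);   // prints 1.2456800000000001e-05

    // Workaround sketch: read the value as text and inspect it directly.
    // 64 is an assumed maximum input length, not a general limit.
    char buf[64];
    if (std::scanf("%63s", buf) == 1)
    {
        const char* dot = std::strchr(buf, '.');
        if (dot != 0)
        {
            // Number of digits after the decimal point, as entered.
            int decimals = (int)std::strlen(dot + 1);
            std::printf("decimal places entered: %d\n", decimals);
        }
    }
    return 0;
}

Keeping the text around lets you recover the user's intended precision, but any arithmetic still has to happen on the binary double, so the representation error itself does not go away.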
WolfPack 491 Posting Virtuoso Team Colleague