I am working in an older version of Python (2.6, I believe). I cannot update to the latest version because it is not currently compatible with the other program I need the code to run with so, unfortunately, the "decimal" module is not available to me (as far as I know).
I am running a program that pulls numbers (monetary values) out of a csv file and does some basic math with them before sending the values to the other program being used.
I cannot use int values because both dollars and cents are used, so I have to be able to use two decimal places.
The float functions are difficult to use because the numbers are stored in memory differently than they appear on screen, so something like 0.1 + 0.2 yields a decimal with too many places for my purposes. I cannot find any information on setting the precision of these numbers, and since they are floating point, I don't really expect to.
I also need to be able to have a function to add commas in the appropriate places for numbers above 999.
Any ideas on the best solution here?

You can multiply the number by 100 and use integers, or truncate to 2 decimal places when printing floats. On most modern computers a float is a binary number as defined by the IEEE 754 floating-point standard. A Python float (a C double underneath) carries roughly 15-16 significant decimal digits, which is plenty of precision for 2-decimal monetary numbers.

x = 0.1 + 0.2
print repr(x)        # full stored value: 0.30000000000000004
y = "%3.2f" % x      # format: minimum width 3, 2 digits after the point
print y              # 0.30

x = 0.1 + 0.2 + 0.005
print repr(x)        # full stored value, slightly above 0.305
y = "%3.2f" % x
print y              # 0.31
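To make explicit what the format string changes (a small added illustration, not from the original post): "%3.2f" only affects the printed string; the underlying float is unchanged.

```python
x = 0.1 + 0.2
s = "%3.2f" % x      # build a display string, rounded to 2 places
print(s)             # 0.30
print(x == 0.30)     # False: the stored value is still 0.30000000000000004
```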


I see that this might be exactly the solution I'm looking for but, I apologize, I'm fairly new to Python, so I have a question about what exactly the

y = "%3.2f" % (x)

line of your code does. You used the same line in both examples but got different printed values. Is this because of the

print repr(x)

statement?
I just want to make sure I understand it enough that I can edit it in the future, if I need to.

Thanks!


Actually, the first option (keeping track of pennies as a (long) integer) is almost always the better choice. If you use floats representing dollars, you are subject to rounding error even if you are very careful. This is because a penny ($0.01 decimal) is in fact an infinite repeating binary fraction, but the register is finite, so there is always a small difference between the intended value and the value actually stored; each subsequent operation works with that slightly-off value, which compounds the error. Whereas if you just store an integer count of pennies, there is no rounding error and no fractions to deal with. You can get a nice string value (without commas) using this:

def dollarStringFromPennies(p):
  # '%03d' pads to at least 3 digits so amounts under $1.00
  # still have two cent digits (e.g. 5 -> '$0.05')
  stringpennies = '%03d' % p
  return '$%s.%s' % (stringpennies[:-2], stringpennies[-2:])
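Since the original question also asked for commas above 999, here is a variant of the same pennies-as-integer idea (the helper name and the grouping loop are mine, not from the post):

```python
def dollarStringWithCommas(p):
    # hypothetical helper: like the function above, but groups
    # the dollar digits with commas every three places
    sign = '-' if p < 0 else ''
    dollars, cents = divmod(abs(p), 100)
    s = '%d' % dollars
    parts = []
    while len(s) > 3:            # peel off three digits at a time
        parts.insert(0, s[-3:])
        s = s[:-3]
    parts.insert(0, s)
    return '%s$%s.%02d' % (sign, ','.join(parts), cents)

print(dollarStringWithCommas(123456789))   # $1,234,567.89
```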

If you want to make a class to handle all this, it is pretty easy, and you can even provide a __str__(self) method that does the above.
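A minimal sketch of what such a class might look like (the class name and choice of operators are my own, not from the post):

```python
class Money(object):
    """Exact money type: stores an integer count of pennies."""
    def __init__(self, pennies):
        self.pennies = int(pennies)
    def __add__(self, other):
        return Money(self.pennies + other.pennies)
    def __sub__(self, other):
        return Money(self.pennies - other.pennies)
    def __str__(self):
        # divmod splits the penny count into (dollars, cents)
        return '$%d.%02d' % divmod(self.pennies, 100)

print(Money(1005))           # $10.05
print(Money(99) + Money(2))  # $1.01
```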

Also, the decimal module does exist in Python 2.6; it has been part of the standard library since Python 2.4 (2004). If you need it, use it.
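For completeness, a quick illustration of the decimal module's exact arithmetic (the values are my own example):

```python
from decimal import Decimal

# construct from strings so no float error is inherited
a = Decimal('0.10')
b = Decimal('0.20')
print(a + b)   # 0.30 exactly, with two places preserved
```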
