What is the Decimal type, and when would you use it instead of a float?

Float types are just the hardware's floating-point data type. Usually this is an IEEE 754 single-precision (4 bytes), double-precision (8 bytes), or extended-precision (10 bytes) value; Python's float is the platform's double.
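You can inspect the parameters of that hardware type from Python itself. On a typical platform where float is an IEEE 754 double, an interpreter session looks like this:

>>> import sys
>>> sys.float_info.mant_dig   # significand bits of the underlying double
53
>>> sys.float_info.dig        # decimal digits a float can hold reliably
15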

The problem with these is that they can only approximate most values. Many decimal fractions, 0.1 for example, have no exact binary representation, so results pick up small rounding errors from the binary arithmetic.
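A quick interpreter session shows the effect:

>>> 0.1 + 0.2
0.30000000000000004
>>> 0.1 + 0.2 == 0.3
False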

The Python decimal module provides a floating-point class called Decimal which stores decimal numbers exactly, to as many significant digits as you ask for. The trade-off is that it is much slower than a float. In most cases being that exact does not matter, so a float works just fine and there is no need for Decimal.
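Here is the same calculation with Decimal. Note that you construct it from a string; building it from a float literal would just copy the float's inaccuracy in:

>>> from decimal import Decimal
>>> Decimal('0.1') + Decimal('0.2')
Decimal('0.3')
>>> Decimal('0.1') + Decimal('0.2') == Decimal('0.3')
True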

However, in fields like finance and scientific computing exactness often matters, so the Decimal type is very handy whenever it makes a difference between, say, 1.000000000 and 1.000000001. Hope this helps.
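Exactly that pair of numbers, for example. The float result below is what an IEEE 754 double produces; its eighth significant digit is already wrong because the inputs were rounded when they were stored:

>>> from decimal import Decimal
>>> Decimal('1.000000001') - Decimal('1.000000000')
Decimal('1E-9')
>>> 1.000000001 - 1.000000000
1.000000082740371e-09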
