I'm trying to do a project that uses the trigonometric functions, but I'm having some trouble with them.

When I type in this code:

import math
print 0.5+math.cos((2*math.pi)/3)

I get the answer 2.22044604925e-016. That's obviously not right. cos((2*pi)/3) is -0.5, so I should get the answer 0. Can anybody help me?

Floating point algorithms in all computer languages suffer from a small error where the floating point world meets the binary world. Representing -0.5 as just a sequence of 1s and 0s is not possible with true accuracy. Look at this ...

import math

# use a list to show the small binary error
mylist = [math.cos((2*math.pi)/3)]
print mylist  # [-0.49999999999999978]

# you could use round() to 15 digits
mylist = [round(math.cos((2*math.pi)/3), 15)]
print mylist  # [-0.5]
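
The same idea applies directly to your original expression: round the result, or compare it against a small tolerance instead of expecting an exact 0. A minimal sketch (the 1e-9 tolerance is an arbitrary choice here) ...

import math

result = 0.5 + math.cos((2*math.pi)/3)

# round away the tiny binary error
print round(result, 15)   # 0.0

# or test against a small tolerance instead of testing for exact equality
print abs(result) < 1e-9  # True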

I don't exactly agree that -0.5 is not representable in binary, because 0.5 is 2**(-1). On the other hand, the binary 2*math.pi/3 is not exactly equal to the mathematical 2*pi/3.

In fact you can obtain the binary representation of floating-point numbers using the bitstring module available on PyPI:

>>> from bitstring import BitString
>>> bs = BitString(float=-0.5, length=64)
>>> bs.bin
'0b1011111111100000000000000000000000000000000000000000000000000000'

This representation of -0.5 is the same as the IEEE 754 representation of the
floating point number on 64 bits. To understand its meaning, go to this online
application http://babbage.cs.qc.edu/IEEE-754/Decimal.html and enter -0.5 in the first field, then click 'not rounded' and read the representation and its meaning below.
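
For this particular value the decoding is easy to do by hand: the first bit is the sign (1, so the number is negative), the next 11 bits 01111111110 are the biased exponent (1022; the bias is 1023, so the real exponent is -1), and the remaining 52 bits are the fraction (all zero, so the significand is 1.0). The value is therefore -1.0 * 2**(-1) = -0.5.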

Also note that some native support exists in the Python interpreter: sys.float_info contains constants read from your system's float.h header, and the float class has a method hex() which returns a hexadecimal representation of the number.
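
For example, on a machine with the usual 64 bit IEEE 754 doubles ...

import sys

print sys.float_info.mant_dig   # 53  (bits of precision in the significand)
print sys.float_info.max_exp    # 1024
print (-0.5).hex()              # -0x1.0000000000000p-1, that is -1.0 * 2**(-1)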

A nice exercise would be to write a class which displays the same information as the page http://babbage.cs.qc.edu/IEEE-754/Decimal.html about a floating point number and its binary representation :)
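
A minimal sketch of that exercise, using struct to reinterpret the double's bytes as a 64 bit integer and then slicing the three bit fields (the function name is just for the sketch, and it only shows the raw fields, not all the derived information of the page) ...

import struct

def describe_double(f):
    # ">" forces big-endian order, so the bits come out in the standard IEEE 754 layout
    (n,) = struct.unpack(">Q", struct.pack(">d", f))
    bits = format(n, "064b")
    print "value   :", repr(f)
    print "sign    :", bits[0]
    print "exponent:", bits[1:12], "(biased value %d)" % int(bits[1:12], 2)
    print "fraction:", bits[12:]

describe_double(-0.5)

""" expected output -->
value   : -0.5
sign    : 1
exponent: 01111111110 (biased value 1022)
fraction: 0000000000000000000000000000000000000000000000000000
"""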

A more subtle question is how to make sure that you obtain the binary representation actually used by your processor. Apparently, some processors use 80 bits and not 64 ...

With this:

import math

print(math.cos((2*math.pi)/3))         # -0.5
print(0.5 + math.cos((2*math.pi)/3))   # 2.22044604925e-16
print(-0.5 + math.cos((2*math.pi)/3))  # -1.0

x = math.cos((2*math.pi)/3)
print(x)        # -0.5
print([x])      # [-0.4999999999999998]
print([-0.5 + x])   # [-0.9999999999999998]

Strange?

It is not strange, because printing a float x actually prints str(x), while printing a list invokes repr() on the list items. The method float.__str__() yields fewer decimal digits than float.__repr__(), so the floating point number appears rounded:

>>> x = 1.0/3
>>> str(x)
'0.333333333333'
>>> repr(x)
'0.33333333333333331'
>>>
>>> x = -0.49999999999999
>>> str(x)
'-0.5'
>>> repr(x)
'-0.49999999999999001'

About the actual bits used by CPython: the PyFloatObject structure from floatobject.h uses a C double to store the number, so the bits are the same as the bits of the C type double. I wrote a small function to extract the actual bits using the ctypes module:

from ctypes import *
from binascii import hexlify

def double_to_bits(f):
    cf = c_double(f)
    n = sizeof(cf)
    assert not n % 2
    # view the double's memory as a sequence of raw bytes
    pc = cast(pointer(cf), POINTER(c_char))
    h = ''.join(pc[i] for i in xrange(n))
    # prepend "1" so leading zero bits are not lost, then strip it off with [3:]
    return bin(int(b"1" + hexlify(h), 16))[3:]

print double_to_bits(-0.5)

""" my output (this should be machine dependent) -->
0000000000000000000000000000000000000000000000001110000010111111
"""

Interestingly, this yields the same result as using struct.pack():

>>> from binascii import hexlify
>>> import struct
>>> p = struct.pack("d", -0.5)
>>> print bin(int(b"1" + hexlify(p), 16))[3:]
0000000000000000000000000000000000000000000000001110000010111111

Also notice that on my machine, this representation differs from the IEEE 754 representation referred to above. How is it coded? It would be worth having a Python module which recognizes the different floating-point formats ...

To complete the previous post, the difference with the IEEE 754 representation is due to the endianness of the system. One can recover the standard representation by reversing the bytes:

from ctypes import *
from binascii import hexlify
import sys

def double_to_bytes(f):
    cf = c_double(f)
    n = sizeof(cf)
    assert not n % 2
    pc = cast(pointer(cf), POINTER(c_char))
    h = ''.join(pc[i] for i in xrange(n))
    s = bin(int(b"1" + hexlify(h), 16))[3:]
    # split the 64-bit string into 8 groups of 8 bits (one group per byte)
    return [s[i:i+8] for i in xrange(0, len(s), 8)]

L = double_to_bytes(-0.5)

print "This system uses %s endian byteorder" % sys.byteorder
print " ".join(L)
print " ".join(reversed(L))

""" my output -->
This system uses little endian byteorder
00000000 00000000 00000000 00000000 00000000 00000000 11100000 10111111
10111111 11100000 00000000 00000000 00000000 00000000 00000000 00000000
"""

Sorry I haven't replied to this sooner. Thank you for all the help with this. My program is working excellently now.
