
I am going through some book assignments, and I am running into some terminology that doesn't make sense to me.

Implement this class...

class CisFloat
CisFloat a("325.12315");
a+b
a * b


14 (exp # bits)
113 (sigment # bits)

10000........0 bias

What does the author mean by "exp # bits", "sigment # bits" and "10000.....0 bias"? I have been programming for a long time and have never run into this terminology. I am tempted just to disregard these as an attempt to distract the programmer from the main objective, as I could implement this class easily without regard to these miscellaneous terms.

Please provide clarification if possible/applicable.

-davo

Comments
A rare quality question on DW - thanks - salem

http://en.wikipedia.org/wiki/Floating_point
The number of bits you dedicate to the exponent determines the range of floating point numbers you can support.

The number of bits in the significand (I don't know what sigment could mean otherwise) determines the accuracy of any individual floating point number.

The bias is an offset applied to the exponent, which I think is primarily to save having to store a sign bit for the exponent (could be wrong on that).

> I could implement this class easily without regard to these miscellaneous terms.
A built-in double datatype has, say, 11 exp bits and 52 significand bits. Given some very large numbers, or some very precise numbers, your simple implementation would fail.
