I am going through some book assignments, and I am running into some terminology that doesn't make sense to me.

Implement this class...

class CisFloat
CisFloat a("325.12315");
a + b
a * b


14 (exp # bits)
113 (sigment # bits)

10000........0 bias

What does the author mean by "exp # bits", "sigment # bits" and "10000.....0 bias"? I have been programming for a long time and have never run into this terminology. I am tempted just to disregard these as an attempt to distract the programmer from the main objective, as I could implement this class easily without regard to these miscellaneous terms.

Please provide clarification if possible/applicable.

-davo


http://en.wikipedia.org/wiki/Floating_point
The number of bits you dedicate to the exponent determines the range of floating point numbers you can support.
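
Here's a rough sketch of why (my own illustration, not from the book): with e exponent bits and IEEE-style encoding, the largest usable exponent is about 2^(e-1) - 1, since the all-ones pattern is reserved for infinity/NaN.

#include <cmath>
#include <cstdio>
#include <initializer_list>

int main()
{
    for (int e : {8, 11, 15})   // float, double, quad-style exponent widths
    {
        int max_exp = (1 << (e - 1)) - 1;   // largest finite exponent
        std::printf("%2d exp bits -> max exponent %5d -> biggest value ~ 10^%.0f\n",
                    e, max_exp, max_exp * std::log10(2.0));
    }
}

That prints ~10^38 for 8 bits and ~10^308 for 11 bits, which matches the quoted limits of float and double.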

The number of bits in the significand (I don't know what sigment could mean otherwise) determines the accuracy of any individual floating point number.
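
Same idea as a sketch: decimal digits of precision come out to roughly bits * log10(2), which is why a double is usually quoted as ~15-16 digits.

#include <cmath>
#include <cstdio>
#include <initializer_list>

int main()
{
    for (int n : {24, 53, 113})   // float, double, quad (counting the hidden bit)
        std::printf("%3d significand bits ~ %.1f decimal digits\n",
                    n, n * std::log10(2.0));
}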

The bias is an offset added to the actual exponent before it is stored, which I think is primarily to save having to store a separate sign bit for the exponent (could be wrong on that).
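
A quick sketch of how that works. Reading the book's "10000........0" as a binary 1 followed by thirteen 0s, the bias for a 14-bit exponent field would be 2^13 = 8192 (that value is my reading of the notation, not something the book spells out; IEEE uses 2^(e-1) - 1 instead, e.g. 1023 for a double):

#include <cstdio>
#include <initializer_list>

int main()
{
    // Assumed bias: the book's "10000....0" read as binary, i.e. 2^13 = 8192.
    // Adding the bias makes the stored field an unsigned number, so a
    // negative actual exponent needs no separate sign bit.
    const int bias = 1 << 13;
    for (int actual : {-5, 0, 5})
    {
        unsigned stored = (unsigned)(actual + bias); // what goes into the bits
        int decoded = (int)stored - bias;            // subtract the bias to get it back
        std::printf("actual %3d -> stored %5u -> decoded %3d\n",
                    actual, stored, decoded);
    }
}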

> I could implement this class easily without regard to these miscelleneous terms.
A built-in double datatype has, say, 11 exp bits and 52 significand bits. Given some very large numbers, or some very precise numbers, your simple implementation would fail.
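
To see those fields on a real machine, here's a sketch that pulls a double apart, assuming IEEE 754 binary64 (1 sign bit, 11 exponent bits, 52 fraction bits, bias 1023), which is what practically every current platform uses:

#include <cstdio>
#include <cstring>
#include <cstdint>

int main()
{
    double d = 325.12315;
    std::uint64_t bits;
    std::memcpy(&bits, &d, sizeof bits);   // portable way to view the raw bits

    unsigned sign       = (unsigned)(bits >> 63);
    unsigned stored_exp = (unsigned)((bits >> 52) & 0x7FF);  // 11-bit field
    std::uint64_t frac  = bits & 0xFFFFFFFFFFFFFull;         // 52-bit field
    int actual_exp      = (int)stored_exp - 1023;            // remove the bias

    std::printf("sign=%u stored exp=%u actual exp=%d fraction=0x%llx\n",
                sign, stored_exp, actual_exp, (unsigned long long)frac);
}

Your CisFloat with 14/113 bits is the same layout, just wider, so you can't lean on the hardware types to hold it.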
