
Hi,

I'm trying to express a decimal number as a binary floating-point value, but I'm having trouble with the normalization and what comes afterwards.

So for the example of 2, we would express it in binary as 10, then normalize it to get 0.1 * 2^2? How can I express this value as a sign/magnitude mantissa? (Please explain the methodology.) Let's say we want it in 10 bits.
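For what I've tried so far, here is a rough sketch in C of what I think the steps are: normalize the value to the form 0.1xxxx * 2^e, then pack sign, biased exponent, and mantissa into 10 bits. The bit split (1 sign, 4 exponent with an excess-8 bias, 5 mantissa) is just an assumption on my part, since the assignment doesn't pin it down:

#include <stdio.h>
#include <math.h>

/* Assumed 10-bit layout: 1 sign bit, 4 exponent bits (excess-8 bias),
 * 5 mantissa bits, mantissa normalized to 0.1xxxx (leading 1 stored). */
#define EXP_BITS 4
#define MAN_BITS 5
#define EXP_BIAS 8

unsigned encode(double value)
{
    unsigned sign = 0;
    int exponent = 0;
    unsigned mantissa;

    if (value < 0) {
        sign = 1;
        value = -value;
    }

    /* Normalize so that 0.5 <= value < 1, i.e. 0.1xxxx... * 2^exponent. */
    while (value >= 1.0) { value /= 2.0; exponent++; }
    while (value > 0.0 && value < 0.5) { value *= 2.0; exponent--; }

    /* Keep the top MAN_BITS bits of the fraction. */
    mantissa = (unsigned)(value * (1 << MAN_BITS));

    /* Pack: sign | biased exponent | mantissa. */
    return (sign << (EXP_BITS + MAN_BITS))
         | ((unsigned)(exponent + EXP_BIAS) << MAN_BITS)
         | mantissa;
}

int main(void)
{
    /* 2 decimal = 10 binary = 0.1 * 2^2, so I expect exponent 2
     * (biased 1010) and mantissa 10000. */
    unsigned bits = encode(2.0);
    printf("0x%03X\n", bits);
    return 0;
}

Is that roughly the right methodology, or does the mantissa need to be handled differently?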

Thanks everyone.
