Hi,

I'm trying to convert a decimal number to binary floating point, but I'm having trouble with the normalization step and what comes after it.

So for the example of 2, we would express it in binary as 10, then normalize it to get 0.1 * 2^2? How can I express this value with a sign/magnitude mantissa? (Please explain the methodology.) Let's say we want it in 10 bits.
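In case it helps show where I'm stuck, here is a rough Python sketch of the steps as I understand them so far. I'm assuming "10 bits" means one sign bit plus a 9-bit magnitude for the mantissa, with the exponent kept separately, and that the mantissa is normalized to the 0.1xxx... * 2^e form from my example. Is this on the right track?

    def to_sign_magnitude(value, mantissa_bits=10):
        """Return (sign_bit, exponent, mantissa_bit_string) for a decimal value."""
        sign = 0 if value >= 0 else 1
        mag = abs(value)
        if mag == 0:
            return sign, 0, '0' * (mantissa_bits - 1)

        # Normalize so the magnitude lies in [0.5, 1): mag = f * 2**exp
        exp = 0
        while mag >= 1:
            mag /= 2
            exp += 1
        while mag < 0.5:
            mag *= 2
            exp -= 1

        # Extract mantissa bits by repeated doubling; the first bit is
        # always 1 because of the normalization above.
        bits = []
        for _ in range(mantissa_bits - 1):   # one bit is reserved for the sign
            mag *= 2
            bit = int(mag)
            bits.append(str(bit))
            mag -= bit
        return sign, exp, ''.join(bits)

    print(to_sign_magnitude(2.0))   # (0, 2, '100000000')  ->  +0.1 (binary) * 2^2 = 2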

Thanks everyone.
