I want to know how a microprocessor converts binary digits into their decimal equivalent. The processor can only manipulate 0s and 1s, so how are these numbers converted back into decimal? Is the conversion done by another circuit? If so, how does it work and what is it called? Or is it done in software?
What you need to see here is that the 'decimal equivalent' is actually a string of characters: in the case of a 64-bit two's-complement integer representation, there may be up to 19 digits and a sign to represent. Depending on the character encoding, this may mean as much as 80 …
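To make that concrete, here is a minimal sketch in C of the usual software approach: repeatedly divide the value by 10, and each remainder is one decimal digit, which is turned into a character by offsetting it from ASCII '0'. The function name `i64_to_decimal` and the buffer sizes are my own illustration, not a standard library routine.

```c
#include <stdio.h>
#include <stdint.h>

/* Convert a 64-bit signed integer to its decimal string form.
 * buf must hold at least 21 bytes: up to 19 digits, an optional
 * sign, and the terminating '\0'. */
static void i64_to_decimal(int64_t value, char buf[21])
{
    char tmp[20];
    int i = 0;
    int negative = (value < 0);
    /* Work with the magnitude as unsigned so INT64_MIN is safe. */
    uint64_t mag = negative ? (uint64_t)(-(value + 1)) + 1
                            : (uint64_t)value;

    /* Peel off decimal digits, least significant first:
     * each remainder mod 10 is one digit, offset into ASCII. */
    do {
        tmp[i++] = (char)('0' + (mag % 10));
        mag /= 10;
    } while (mag != 0);

    /* Emit the sign, then reverse the digits into the output. */
    int j = 0;
    if (negative)
        buf[j++] = '-';
    while (i > 0)
        buf[j++] = tmp[--i];
    buf[j] = '\0';
}

int main(void)
{
    char buf[21];
    i64_to_decimal(-9223372036854775807LL - 1, buf); /* INT64_MIN */
    printf("%s\n", buf);  /* prints -9223372036854775808 */
    return 0;
}
```

So there is no dedicated binary-to-decimal circuit involved in the general case: the processor only ever does binary arithmetic (division, remainder, addition), and software like `printf` builds the character string that you then see as "decimal" on the screen.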