I want to know how a microprocessor converts binary digits into their decimal equivalent. The processor can only manipulate 0's and 1's, so how are those numbers converted back into decimal? Are they converted by another circuit? If so, how does that work and what is it called? Or are they converted by software?

All 2 Replies

What you need to see here is that the 'decimal equivalent' is actually a string of characters, which in the case of a 64-bit two's-complement integer can run to 19 digits plus a sign. Depending on the character encoding, that may mean as much as 80 bytes (in the case of UTF-32), though the most common encodings (ASCII, Latin-1, and UTF-8) use 1 byte per character for the Arabic numerals.
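To make that concrete, here is a small C illustration (my own example, not from the thread): the 'decimal equivalent' of the most negative 64-bit value is just an array of characters, and each of those characters is itself stored as a small binary code.

```c
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* The "decimal equivalent" is just an array of characters. */
    const char *decimal = "-9223372036854775808";   /* most negative 64-bit value: a sign plus 19 digits */

    printf("characters: %zu\n", strlen(decimal));                    /* 20 */
    printf("bytes in ASCII/Latin-1/UTF-8: %zu\n", strlen(decimal));  /* 1 byte per character */
    printf("bytes in UTF-32: %zu\n", 4 * strlen(decimal));           /* 4 bytes per character = 80 */

    /* Each character is itself stored as a small binary number (its ASCII code). */
    for (size_t i = 0; i < strlen(decimal); ++i)
        printf("'%c' is stored as 0x%02X\n", decimal[i], (unsigned char)decimal[i]);

    return 0;
}
```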

Few if any CPU instruction sets have direct support for a particular character set; and while some (including the ubiquitous x86 and x86-64) have some very basic string-manipulation instructions, these generally have nothing to do with the character encoding: they are mostly limited to things like copying a string from one place in memory to another.

Now, your standard PC has some firmware support for a small number of character encodings, primarily for use in displaying text on a text-mode screen, but that too is software (just software stored in ROM chips). The conversions done by most language libraries are entirely in software, since they have to be able to handle different encodings, and how the conversion is done varies with the encoding.
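As a rough sketch of what such a library routine does internally (my own code, assuming an ASCII-compatible encoding where '0'..'9' occupy consecutive codes), the conversion is just repeated division by 10, turning each remainder into a digit character:

```c
#include <stdio.h>

/* Minimal sketch of a decimal conversion routine, assuming an ASCII-compatible
 * encoding. Writes the decimal text of 'value' into 'buf' (at least 21 bytes:
 * up to 20 digits plus the terminating '\0') and returns a pointer to it. */
static char *to_decimal(unsigned long long value, char buf[21])
{
    char *p = buf + 20;
    *p = '\0';                            /* build the string backwards */
    do {
        *--p = (char)('0' + value % 10);  /* low-order digit -> digit character */
        value /= 10;
    } while (value != 0);
    return p;
}

int main(void)
{
    char buf[21];
    printf("%s\n", to_decimal(42ULL, buf));                    /* the bits 101010 become "42" */
    printf("%s\n", to_decimal(18446744073709551615ULL, buf));  /* full 20-digit maximum       */
    return 0;
}
```

Building the string backwards is the usual trick, since the divisions produce the least-significant digit first.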

Yeah, as Schol-R-LEA said, it's all done in software, and it's mainly just a matter of characters. The reality is that a computer never has to convert a number into its decimal representation unless it is about to display it on the screen for a human being to read, and human beings read characters. So this is really just about converting a number into a sequence of characters, which is a job for software. There would be no point in having dedicated modules on the CPU for this kind of conversion, because they would just waste valuable real estate (room on the chip). Binary-to-decimal conversion is never going to be performance-critical enough to need special circuitry, simply because it only ever happens to face a human being, and human beings are infinitely slower than the code that generates those decimal characters.
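In practice you never write that conversion loop yourself; you just call a library routine such as C's printf, which performs the same software conversion before any characters reach the screen:

```c
#include <stdio.h>

int main(void)
{
    int x = 0x2A;          /* the CPU only ever sees the bit pattern 101010 */
    printf("%d\n", x);     /* the library converts those bits to the characters "42" */
    return 0;
}
```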

At some point, as a programmer, you end up forgetting that decimal numbers even exist... because the only things that matter are binary numbers (and base-2 floating-point numbers) and, sometimes, their hexadecimal equivalents (which are much more natural to use in computing).
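Hexadecimal is "more natural" because each hex digit corresponds to exactly 4 bits, so converting to hex is just a matter of grouping bits with shifts and masks, whereas decimal needs repeated division. A quick C illustration (again my own example):

```c
#include <stdio.h>

int main(void)
{
    unsigned int x = 0xDEADBEEFu;

    printf("decimal: %u\n", x);   /* 3735928559 - needs repeated division by 10  */
    printf("hex:     %X\n", x);   /* DEADBEEF   - each digit is just a 4-bit group */

    /* Extracting one hex digit is a shift and a mask, no division needed: */
    unsigned int top_digit = (x >> 28) & 0xF;   /* 0xD */
    printf("top 4 bits: %X\n", top_digit);
    return 0;
}
```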
