How do you make it so that when you enter a number, out comes a letter? Also, to make it more convenient, I have the list for each letter on the keyboard, which is assigned a given binary code in ASCII.
So if you type in 1000001
then out would come the capital letter A,
and if you typed in 1000010
then out would come the capital letter B.
Dude, I have already read about that stuff; that wasn't my question at all.
The only way to answer your question is to convert the binary digits to decimal, as shown in those posts and other places you have probably read. After all, all characters, such as 'A', are nothing more than integers: 'A' = 65, 'B' = 66, etc. Google for "ascii chart" and you will find them all. Simply convert the binary value to an integer and you will have the answer to your question.
Wrong. Binary digits are directly converted into the on and off signals that the microprocessor runs on; it's already converted.
If you convert binary to decimal, the computer will have to convert decimal back to binary, so why not use binary instead? Also, what he told me worked just as I wanted it to. Also, I am an electronics wiz, so I know the inside of computers and how they work.
>>if you convert binary to decimal the computer will have to convert decimal back to binary so why not use binary
No it doesn't -- what you see on the screen is not the same as how it is represented internally by the program. The program only sees binary -- it doesn't know a thing about visual stuff. The number 01000B is only visual -- what you see on your monitor. The program cannot deal with that directly; it has to be converted to its internal format. 01000B occupies 5 bytes of data (just count them -- '0', '1', '0' ... is 5 digits). That all has to be converted down to the size of an int (4 bytes on a 32-bit processor).
After the conversion, if you use a debugger and look at the value of the 4 bytes beginning at the address of the integer you used, you will find that their value (on Intel computers) is 08 00 00 00 in hex format, which is the original converted value.
I am well aware of how the hardware works. Unfortunately, the C and C++ languages have no automatic way of converting a string that contains a binary number such as "100011" into an integer using the assignment operator, such as
int n = "0100011";<<< WRONG
but you can use strtol():
char *p; int n = strtol("010011", &p, 2); // 010011 binary == 19 decimal
Now, if you use a scientific calculator you will find that 'A' (decimal 65) is 1000001 binary. So you can either use the strtol() function to convert 1000001B back to decimal as in my previous example, or do it the long way using the formula in the links previously posted. But you CANNOT simply assign 1000001 to an int as Server_Crash illustrated (maybe that's the reason for his handle :cheesy:
int n = 1000001; is NOT the binary value 1000001B. Use your scientific calculator -- 1000001 decimal is 11110100001001000001 binary! C and C++ cannot auto-convert binary numbers the way they do decimal, hex and octal.
The problem was not with the binary-to-decimal conversion; it was with the usage of char( x ). Anyway, if the usage is not correct, that's fine with me. The correct output for 1000001 was pure coincidence, I guess. Talk about luck.
A computer only knows binary, because all memory storage in a computer is determined by the state of each bit being on or off.
It does not magically store 'A' or a decimal 65 in memory; it stores it as 0b1000001, which is then converted to base 10 or an ASCII glyph for display.
When you're coding anything, what you write is compiled down into assembly and then machine code (binary). That is what happens when you build and compile. The processor then reads in the machine code and executes it, and the results are fed back to you in a form you can read.
We appreciate that you're trying hard to teach yourself, but you shouldn't assume that you are correct. Everyone can be wrong, but only those willing to accept that they are incorrect can advance.
Yes, I know; that is what I was trying to explain to you, except I didn't know that the conversion was ASCII. I knew about the conversion already; I thought ASCII was the name for the binary codes that make up 'A' before the conversion to 'A'. Plus, my use of words wasn't exactly clear, but that's because it is very hard for me to explain things. At least this taught me to rethink my choice of words. Sorry about all the misunderstanding. Also, back to the real subject:
If the use of char(x) wasn't correct, then why did it work when I put it through my compiler? I'm confused now.