How do you make it so that when you enter a number, out comes a letter? Also, to make it more convenient for the computer, I have the list for each letter on the keyboard, which is assigned a given binary code in ASCII.

So if you type in 1000001
then out would come the capital letter A,
and if you typed in 1000010
then out would come the capital letter B.

There are a lot of answers on the web if you use Google. Look here for one of them.

Dude, I have already read about that stuff. That wasn't my question at all.

One way would be to:
accept input as a string,
convert the string into an int,
convert the int into a char (or cast the int to a char if all you want to do is display it).
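For instance, here is a minimal sketch of those three steps, assuming the input really is a binary string of at most eight digits such as 1000001 (std::bitset does the base-2 parsing; everything else is just illustration):

#include <bitset>
#include <iostream>
#include <string>

int main()
{
    std::string input;                       // accept input as a string, e.g. "1000001"
    std::cin >> input;

    // convert the binary string into an int (bitset parses the 0s and 1s)
    int value = (int)std::bitset<8>(input).to_ulong();

    // cast the int into a char to display it -- 1000001 binary == 65 == 'A'
    std::cout << (char)value << std::endl;
    return 0;
}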

Yep it's pretty simple:

#include <iostream>
#include <cstdio>
using namespace std;

int main(int argc, char *argv[])
{
    int x = 1000001;
    cout << char(x) << endl;
    getchar();
    return 0;
}

That isn't right -- it just assigns the number 1000001 decimal to an integer. Binary digits have to be converted.

Dude, I have already read about that stuff. That wasn't my question at all.

The only way to answer your question is to convert the binary digits to decimal, as shown in those posts and other places you have probably read. After all, all characters, such as 'A', are nothing more than integers. 'A' = 65, 'B' = 66, etc. Google for "ascii chart" and you will find them all. Just convert the binary value to an integer and you will have the answer to your question.
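To illustrate, a minimal sketch of that conversion done by hand, assuming the binary digits arrive as a string of '0' and '1' characters (the loop doubles the running total and adds each new bit):

#include <iostream>
#include <string>
using namespace std;

int main()
{
    string bits = "1000001";                 // the binary digits as typed by the user

    // convert the binary digits to their integer value
    int value = 0;
    for (size_t i = 0; i < bits.size(); ++i)
        value = value * 2 + (bits[i] - '0');

    cout << value << " = " << char(value) << endl;   // prints: 65 = A
    return 0;
}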

Wrong. Binary digits are directly converted into the on and off signals that the microprocessor runs on; it's already converted.

If you convert binary to decimal, the computer will have to convert decimal back to binary, so why not use binary into instead? Also, what he told me worked just as I wanted it to. Also, I am an electronics wiz, so I know the inside of computers and how they work.

Take that "into" out -- I accidentally messed up.

Plus, no, it converts the computer integer into a variable, which char then converts the binary into the pixels on the computer screen in a certain way, making the A.

>>If you convert binary to decimal, the computer will have to convert decimal back to binary, so why not use binary

No it doesn't -- what you see on the screen is not the same as how it is represented internally by the program. The program only sees binary -- it doesn't know a thing about visual stuff. The number 01000B is only visual -- what you see on your monitor. The program cannot deal with that directly; it has to be converted to its internal format. 01000B occupies 5 bytes of data (just count them -- '0', '1', '0', '0', '0' is 5 digits). That all has to be converted down to the size of an int (4 bytes on a 32-bit processor).

After the conversion, if you use a debugger and look at the value of the 4 bytes beginning at the address of the integer you used, you will find that their value (on Intel computers) is 08 00 00 00 in hex format, which is the original converted value.
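Here is a small sketch of that idea, assuming a little-endian (Intel-style) machine; it converts the five-character string "01000" to its 4-byte int value and dumps the raw bytes:

#include <cstdio>
#include <cstdlib>

int main()
{
    const char *text = "01000";              // five bytes of visual digits: '0' '1' '0' '0' '0'
    int n = (int)strtol(text, NULL, 2);      // the internal 4-byte value: 8

    // dump the raw bytes of the int -- prints 08 00 00 00 on a little-endian machine
    const unsigned char *bytes = (const unsigned char *)&n;
    for (unsigned i = 0; i < sizeof n; ++i)
        printf("%02X ", bytes[i]);
    printf("\n");
    return 0;
}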


I am well aware of how the hardware works. Unfortunately, the C and C++ languages do not have an automatic way of converting a string that contains a binary number such as 100011 into an integer using the assignment operator, such as

int n = "0100011";<<< WRONG

but you can use strtol()

char *p;
int n = strtol("010011", &p, 2); // 010011 Binary == 19 Decimal

Now, if you use a scientific calculator you will find that 'A' (decimal 65) is 1000001 Binary. So you can either use the strtol() function to convert 1000001B back to decimal as in my previous example, or do it the long way by using the formula in the links previously posted. But you CANNOT just simply assign 1000001 to an int and expect it to be treated as binary, as Server_Crash illustrated (maybe that's the reason for his handle :cheesy: )
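To make that concrete, a small sketch that feeds the string "1000001" (the binary for 'A') through strtol():

#include <cstdio>
#include <cstdlib>

int main()
{
    char *p;
    int n = (int)strtol("1000001", &p, 2);   // 1000001 Binary == 65 Decimal

    printf("%d = %c\n", n, n);               // prints: 65 = A
    return 0;
}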

What you say about binary handling in computers is correct, Dragon.
But what Server_Crash did was not int n = "100001"; he did int n = 1000001; which is valid code. What I don't get is how did

char( int n )

work. It gives you the correct character output. Is that a cast? Or is there a function called char that takes an input of type int? It could also be a char constructor... :eek: This calls for Narue.

int n = 1000001 is NOT the same as 1000001 Binary. Use your scientific calculator -- 1000001 Decimal is 11110100001001000001 Binary! C and C++ cannot do automatic conversion of binary numbers the way they do decimal, hex and octal.

char( int n )

[edit] The above is an invalid statement.

That is more commonly written like this (a cast):

(char)n

int main(int argc, char *argv[])
{
    int x = 1000001;
    cout << char(x) << endl;
    getchar();
}

This is wrong, as the Dragon has said.

We know it's wrong because if we try the code below using Server's logic we should get 'B'.

Since 1000010 binary = 66 decimal:

#include <iostream>
#include <cstdio>
using namespace std;

int main(int argc, char *argv[])
{
    int x = 1000010;
    cout << char(x) << endl;
    getchar();
    return 0;
}

We don't get 'B'

>I am an electronics wiz, so I know the inside of computers

Is that a joke? I doubt you even know the inside of your own head.
:D

The problem was not with the binary-to-decimal conversion. It was with the usage of char( x ). Anyway, if the usage is not correct, it is fine with me. The correct output for 1000001 was pure coincidence, I guess. Talk about luck.

As Ancient Dragon says:

A computer only knows binary, because all memory storage in a computer is determined by the state of each bit being on or off.

It does not magically store 'A' or a decimal '65' in memory; it stores it as 0b1000001, which is then interpreted as base 10 or as an ASCII character when it is displayed.

When you're coding anything, what you write is translated down into assembly and then machine code (binary). This is what happens when you build and compile. The processor then reads in the machine code and processes it, and the results are fed back to you in a form you can read.

We appreciate that you're trying hard to teach yourself, but you shouldn't be so sure that you are correct. Everyone can be wrong, but only those willing to accept that they are incorrect can advance.

Hm. I think I'll change my sig to that...

>>Everyone can be wrong

I've only been wrong once in my life -- that was when I thought I was wrong, but I was wrong about that :cheesy:

Yes, I know. That is what I was trying to explain to you, except I didn't know that the conversion was ASCII. I knew about the conversion already; I thought ASCII was the name for the binary codes that make up A before the conversion to A. Plus, my use of words wasn't exactly clear, but that's because it is very hard for me to explain things. At least this taught me to rethink my use of words. Sorry about all the misunderstanding. Also, back to the real subject:
if the use of char(x) wasn't correct, then why did it work when I put it through my compiler? I'm confused now.

That isn't right -- it just assigns the number 1000001 decimal to an integer. Binary digits have to be converted.

My bad, Dragon. I assumed it was correct since it worked. ;)

Whoa, OK, now that makes me even more confused. How can it be wrong when it works?

My bad, Dragon. I assumed it was correct since it worked. ;)

It worked because it assigned the value 1000001 decimal to the integer, not 1000001 binary, which has a completely different decimal value.
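For what it's worth, the "luck" can be explained. A small sketch, assuming a typical machine where converting an out-of-range int to char simply keeps the low byte:

#include <cstdio>

int main()
{
    // 1000001 decimal is 0xF4241; its low byte is 0x41, which happens to be 'A'.
    printf("%X -> %c\n", 1000001, (char)1000001);   // prints: F4241 -> A

    // 1000010 decimal is 0xF424A; its low byte is 0x4A, which is 'J', not 'B'.
    printf("%X -> %c\n", 1000010, (char)1000010);   // prints: F424A -> J
    return 0;
}

So the earlier snippet printed 'A' only because the low byte of 1000001 decimal happens to be 65, and the same accident does not repeat for 'B'.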

Grunge, it's better to start again and go with conventional methods that meet the standards of people such as Ancient Dragon.

ASCII stands for American Standard Code for Information Interchange. It is what's commonly used by, well, Americans on their systems.

There's also Unicode, which covers almost all writing systems (e.g., Arabic, Thai, Greek, etc.), and EBCDIC, which is used on very few systems these days.

ASCII is what we English speakers tend to use because we don't need the larger character set provided by Unicode, and EBCDIC was derived from the punch card programming used in the '50s and '60s.

It's just a standard way to represent the characters, and as such, there are well-known ways to handle ASCII conversions to binary, hex, octal, and decimal numbers.

Sadly, I can't recall 'em off the top of my head.
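For the record, here is one small sketch of showing the same character in the other bases (the printf format specifiers handle decimal, hex and octal, and std::bitset handles binary; the choice of 'A' is just an example):

#include <bitset>
#include <cstdio>
#include <iostream>

int main()
{
    char c = 'A';

    // decimal, hex and octal come straight from printf format specifiers
    printf("decimal: %d  hex: %X  octal: %o\n", c, (unsigned)c, (unsigned)c);    // 65  41  101

    // binary can be produced with std::bitset
    std::cout << "binary: " << std::bitset<8>((unsigned char)c) << std::endl;    // 01000001
    return 0;
}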

... back to the real subject:
if the use of char(x) wasn't correct, then why did it work when I put it through my compiler? I'm confused now.

I didn't say char(x) was not correct -- char(int x) is incorrect.

int x = 'A';
printf("%c", char(x));   // <<< this is ok
printf("%c", (char)x);   // <<< and is the same as this

OK, now I totally get this. Um, I didn't even know that he did that; I still wrote it the other way, lol. What weirdness all this is. OK, thanks you all, bye.

>>I've only been wrong once in my life -- that was when I thought I was wrong, but I was wrong about that...

http://www.daniweb.com/techtalkforums/thread42252.html
;)

>My bad, Dragon. I assumed it was correct since it worked
Server_Crash, I thought your post was a late April Fools'... ;)


>The correct output for 1000001 was pure coincidence, I guess. Talk about luck.

Yup


>Sadly, I can't recall 'em off the top of my head.
Aww, shame that you almost put me to sleep... ;)
