In the following program I'm trying to set the first, second, and third character values stored in processTheseThree[0], [1], and [2] to the int variables D1, D2, and D3. When I tested my code with the line
std::cout << processTheseThree[firstSecondThird] << std::endl; I was getting the correct values, but when I try to assign those values to my int variables D1-D3, I'm getting really high numbers like 48-51, even when I only input 123. Also, don't worry about the resources.h file; it just holds an array of strings that I'm not currently using.

#include <iostream>
#include <stdlib.h>
#include <string.h>
#include "resources.h"

int main(int argc, char *argv[])
{
    char inputNumber[25] = "000000000000";
    strcat(inputNumber, argv[1]);

    std::cout << inputNumber << std::endl;

    char *input = inputNumber + strlen(inputNumber) - 12;

    std::cout << input << std::endl;

    int groups = 4;
    char processTheseThree[4] = "";
    char *theCharIncrementer = input + 9;

    int firstSecondThird = 0;
    while(groups > 0)
    {
        strncpy(processTheseThree, theCharIncrementer, 3);
//      processTheseThree[theCharIncrementer + 4] = '\0';
        //processTheseThree[3] = '\0';
        //std::cout << processTheseThree << std::endl;

        //while(firstSecondThird < 3)
        //{
            //std::cout << processTheseThree[firstSecondThird] << std::endl;
            int D1 = (int)processTheseThree[0];
            int D2 = (int)processTheseThree[1];
            int D3 = (int)processTheseThree[2];

            std::cout << D1 << std::endl << D2 << std::endl << D3 << std::endl;

            //firstSecondThird++;
        //}

        theCharIncrementer -= 3;
        groups--;
    }


    return 0;
}
/*END*/

To be honest I'm having trouble making any sense of the code, but I would suggest that what you are outputting
is the decimal values of the characters contained in processTheseThree.

Edited 3 Years Ago by Suzie999

So you're taking a char, processTheseThree[0] or something like that, which is '1', and then you're telling the compiler to interpret it as an int?

I would expect that to come out as the int value 49, because the char '1' is stored as the value 49 in memory - http://www.asciitable.com/

You've got a value in memory - 49. If you interpret this value as a char, you get out the char '1' because (as you can see on that link) the char '1' is stored as the number 49. When you interpret that value as an int, well it's going to be 49, because that's the number that's actually there in memory.

Edited 3 Years Ago by Moschops

The trick to turn a single numeric character (digit) into an integer value is simply to subtract the value of the character '0', because it is almost certain (in all reasonable character sets) that digit characters have sequential values (e.g., in the ASCII table, the characters 0, 1, 2, 3... follow each other directly). And char is already an integral type, so no explicit conversion is needed. Try this:

        int D1 = processTheseThree[0] - '0';
        int D2 = processTheseThree[1] - '0';
        int D3 = processTheseThree[2] - '0';

Thanks mike. That worked; I just don't understand how or why. How does subtracting the char '0' from the ASCII value of [0], which is 48 if [0] is 1, get me the input value?

The int value of the char '1' is 49. The int value of the char '0' is 48. So when you subtract '0' from '1', you subtract the value 48 from 49, and what do you get? The int value 1. It does not get you the "input value"; the "input value" was a char, and you're getting an int here. It gets you the int value that the char is commonly used to represent.

If you were using some other crazy character set, so long as the chars '0' to '9' are sequentially represented by numbers, it won't matter if '0' is 48 or 100 or a billion, because '1' will always be just one value higher (i.e. it'd be 49, or 101, or a billion and one).

Edited 3 Years Ago by Moschops

Well, characters are just one-byte integer numbers that are treated in a special way by streams (file, console, etc.) and by displays (the console, text editors, etc.), so that they turn into specific symbols (letters) when shown on the screen, and so that certain special characters (new-line, carriage return, etc.) get special treatment. In C/C++, the char type is just that: an integer type (with values between -128 and 127) with some special semantics.

Now, in ASCII encoding (and most other encodings), the 0-9 digits have values of 48-57. This means that the character '0' is actually equal to the number 48. So, if the digit that is represented with the processTheseThree[0] character is the character '5', then it is equal to the integer value 53, and so, subtracting 48 from it will give you the integer value of 5.

The reason for subtracting the character '0' instead of the integer value 48 is that the platform is not required to use ASCII encoding (and often doesn't; it's usually a slight variation of ASCII). So you can't be sure that '0' is actually 48, but you can be pretty sure that the numerical digits, and the lower-case and upper-case letters of the basic alphabet, will all be placed sequentially in the encoding table. In other words, you can rely on the fact that the character '5' will always have an integer value that is 5 increments after the integer value of '0', i.e., 5 == '5' - '0' is always true (not strictly required to be true AFAIK, but I don't think there is any platform for which it isn't).

Another way to see it is, when you write '0', you are telling the compiler "give me the one byte integer number which looks like '0' when interpreted as a character by this platform".

In C/C++, the char type is just that, an integer type (with values between -128 and 127) with some special semantics.

The default char type may be an unsigned integral type.

It is implementation-defined whether a char object can hold negative values. - IS


5 == '5' - '0' is always true (not strictly required to be true AFAIK ..)

Strictly required to be true in every conforming implementation.

In both the source and execution basic character sets, the value of each character after 0 in the above list of decimal digits shall be one greater than the value of the previous. - IS


you can be pretty sure that numerical digits and lower-case and upper-case letters of the basic alphabet will be all placed sequentially in the encoding table.

Lower-case and upper-case letters need not be contiguous in a conforming encoding; for instance the EBCDIC encoding (still in use today, in mainframes and minis from IBM and a few others).

Comments
thanks for the clarifications!