cout << (char)290 << endl;

I get " as the answer.

cout << (char)20000 << endl;

Here I don't get anything.

When char can take integer values from -128 to 127 and both 290 and 20000 are well out of this range,
then why is there an output in the first case?

When char can take integer values from -128 to 127 and both 290 and 20000 are well out of this range

First, there's no requirement for char to have that particular range. Second, when char is signed, the result of converting an out-of-range value is implementation-defined (and signed arithmetic overflow is outright undefined behavior). Your code is not portable.

In your case, the out-of-range values are wrapping around until they fit, which you can easily see by printing (int)(char)290 and (int)(char)20000. Since 290 is wrapping to 34 ('"' in ASCII and the low octet of Unicode), I suspect 20000 is wrapping to 32, which corresponds to the ' ' (space) character. So something is being printed; you just don't see it. A better test would show you the boundaries:

cout << '>' << (char)20000 << "<\n";
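
For reference, here's a minimal complete program putting the two suggestions together (the converted values are implementation-defined when char is signed, so the exact output can vary by compiler):

#include <iostream>
using namespace std;

int main()
{
    // numeric values the out-of-range constants convert to
    cout << (int)(char)290 << '\n';    // typically 34, '"' in ASCII
    cout << (int)(char)20000 << '\n';  // typically 32, ' ' in ASCII

    // mark the boundaries so the invisible space becomes visible
    cout << '>' << (char)20000 << "<\n";  // typically prints: > <
    return 0;
}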

Thank You

Okay, so 32 corresponds to ' '.

How would it make a difference if it were an unsigned char?

unsigned char x;
x = 20000;
cout << x;

It still gives ' '.
Since the range (where everything is well defined) for an unsigned char is 0-255, shouldn't the wrapping result in something else?

And about the test...
Your code will give a line:
> <
Is that what you intended me to try?

Thank you. Please reply.

How would it make a difference if it were an unsigned char?

In that case the wrap-around is perfectly safe and well defined. Note that vanilla char might be either signed or unsigned.
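
A minimal sketch of what that guarantee buys you, assuming 8-bit chars: every conforming compiler must reduce an out-of-range value modulo 256 before storing it in an unsigned char:

#include <iostream>
using namespace std;

int main()
{
    unsigned char a = 290;    // well defined: 290 % 256 == 34
    unsigned char b = 20000;  // well defined: 20000 % 256 == 32
    cout << (int)a << ' ' << (int)b << '\n';  // prints: 34 32
    return 0;
}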

Is that what you intended me to try?

Yes. Notice the space between the two boundaries? That's what you're printing, and showing the boundaries with > and < allows you to see it.

So 256 will be the same as 0, and 257 will be the same as 1, correct?
And how is it in the case of signed chars? (Is there a rule at all?)
How does it wrap? And why is it unsafe?

About the test: okay, I got it.
Thanks

So 256 will be the same as 0, and 257 will be the same as 1, correct?

Assuming char is eight bits, yes.
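
A couple of asserts make the modulo-256 rule concrete, under that same eight-bit assumption:

#include <cassert>
#include <climits>

int main()
{
    assert(CHAR_BIT == 8);            // the "eight bits" assumption
    assert((unsigned char)256 == 0);  // 256 % 256 == 0
    assert((unsigned char)257 == 1);  // 257 % 256 == 1
    return 0;
}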

And how is it in the case of signed chars? (Is there a rule at all?)

Strictly speaking, implementation-defined: the standard leaves the converted value up to the compiler, so there's no rule you can portably rely on. Google it.

Thank You

Perhaps I am mistaken, but I thought the point of a cast was to unconditionally force the bits of one POD type into another. Where does this "wrapping around" behavior enter the scene?

two bytes:
1 0 1 0 _ 1 0 1 0 - 1 0 1 0 _ 1 0 1 0

now cast down to a single byte would, as far as I knew, be:

1 0 1 0 _ 1 0 1 0

using the least significant 8 bits.


Here's a test program:

#include <iostream>
#include <bitset>
#include <cassert>
#include <string>

using namespace std;

int main()  // portable entry point instead of MSVC's _tmain
{
	assert(sizeof(short) == 2);
	assert(sizeof(signed char) == 1);

	// bitset's string constructor reads the pattern left to right,
	// most-significant bit first, matching how the string is written.
	// (Assigning big[i] = bigBitString[i] in a loop would reverse the
	// bits, because operator[] addresses bit 0, the least significant.)
	string bigBitString = "0111111100000000";
	string smallBitString = "00000000";
	bitset<16> big(bigBitString);
	bitset<8> small(smallBitString);
	short sBig = (short)big.to_ulong();
	signed char cSmall = (signed char)small.to_ulong();
	cout << "Sizeof short: " << sizeof(short) << endl;
	cout << "Sizeof signed char: " << sizeof(signed char) << endl;
	cout << "Short bits: " << big.to_string() << endl;
	cout << "Signed char bits: " << small.to_string() << endl;

	// Convert the 16-bit pattern down to signed char: only the low octet
	// survives (strictly implementation-defined before C++20, but keeping
	// the least-significant 8 bits is what implementations actually do).
	signed char casted = (signed char) big.to_ulong();
	bitset<8> castedBits(casted);
	cout << "Signed char bits after down-casting: " << castedBits.to_string() << endl;

	return 0;
}

Isn't the entire point of specifying a down-cast like this to keep just the least-significant bits that fit in the smaller variable, without overflowing anything?
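
For unsigned destinations, at least, the two descriptions coincide: keeping the least-significant 8 bits and wrapping modulo 2^8 are the same operation; it's only the signed case that the standard leaves implementation-defined. A quick sketch to confirm:

#include <cassert>

int main()
{
    // for unsigned targets, truncation and modular wrap agree
    assert((20000 & 0xFF) == (20000 % 256));         // both are 32
    assert((unsigned char)20000 == (20000 & 0xFF));  // conversion keeps the low octet
    return 0;
}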
