Hi!
I have a problem with the range of data types. I'm working on a program that requires large numbers. According to the Borland C++ 5.2 help, the range of long is 4294967295.
But why doesn't this sample program work??!!

int main(){
    long a=4294967295,b;
    b=a;
    return 0;
}

Can you help me?
Thanks.

Because 4294967295 is right at the boundary, the compiler wraps it around to the other side of the signed long range. With your code it will effectively end up as -1, since 4294967295 - 4294967296 yields exactly that.

That's likely the size of the UNSIGNED long; signed longs that are 32 bits are +/- 2 billion.

On a system with 32-bit longs, then, the range of an UNSIGNED long is 0 ... 0xFFFFFFFF, but a signed long is 0x80000000 ... 0x7FFFFFFF.
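
To illustrate, here's a minimal sketch (assuming 32-bit longs) of what happens when that all-ones value ends up in a signed long:

#include <stdio.h>

int main(void)
{
    unsigned long u = 4294967295UL; /* fits: the top of the 32-bit unsigned range */
    long s = (long)u;               /* implementation-defined; with 32-bit longs this typically wraps to -1 */

    printf("unsigned long: %lu\n", u);
    printf("signed long:   %ld\n", s);
    return 0;
}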


Oh! I made a mistake, that was a misunderstanding on my part!
I was using "long" instead of "unsigned long", but my main program still doesn't work!
I thought the range of unsigned long was the problem.
Why doesn't this program work?

void main(){
  unsigned long array[260920];
  unsigned long b=4294967290,a;
  //long b=2147483647,a;
  a=b;
  cout<<a;
  srand(time(0));
  for(int i=0;i<260920;i++)
      array[i]=rand()%4294967296;
  getch();
}

What makes you think it doesn't work? What doesn't work? Does it compile?

Working in hexadecimal is much easier for large numbers! Using it usually means you WON'T go over the compiler's boundaries (with a number such as 28746262 you could be off by one, whereas with 0xFFFFFFF... you know it's right). The value in the loop also seems a bit too high and a bit random (260920 doesn't mean anything to me...). What is the program supposed to do? Is it just to have 260920 random longs, and if so, what is that going to be used for?
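
A quick sketch of the hex point (the variable names are just for illustration, assuming a 32-bit unsigned long):

unsigned long all_bits = 0xFFFFFFFFUL; /* obviously every bit set */
unsigned long max_dec  = 4294967295UL; /* the same value, but much easier to mistype */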

>But why doesn't this sample program work??!!
Integer literals are just that, integers. If you want a long literal, suffix it with L. If you want an unsigned long literal, suffix it with UL.

>void main(){
main returns an int, it always has and always will.
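
For example, the first snippet could be written with a suffix like this (just a sketch of the idea; whether the value actually fits still depends on how big your compiler's unsigned long is):

int main(void)
{
    unsigned long a = 4294967295UL; /* UL suffix: an unsigned long literal */
    unsigned long b;
    b = a;
    return 0;
}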

>But why doesn't this sample program work??!!

#include <stdio.h>
#include <limits.h>

int main(void)
{
   printf("A long may have values between %ld and %ld.\n", LONG_MIN, LONG_MAX);
   printf("An unsigned long may have values between 0 and %lu.\n", ULONG_MAX);
   return 0;
}

/* my output
A long may have values between -2147483648 and 2147483647.
An unsigned long may have values between 0 and 4294967295.
*/

YMMV.

>But why doesn't this sample program work??!!
>Integer literals are just that, integers. If you want a long literal, suffix it with L. If you want an unsigned long literal, suffix it with UL.

If the integer cannot be held in an int, then the compiler will automatically make it a long, and if it cannot be held as a long, it will make it an unsigned long. So as far as I know it is not necessary to suffix it with L or UL, at least not in this context.

>void main(){
main returns an int, it always has and always will.

I don't know what the difference is between "void main()" and "int main()".

>If the integer cannot be held in an int, then the compiler will automatically make
>it a long, and if it cannot be held as a long, it will make it an unsigned long
Close. Without a suffix, the type for an integer constant will start at int, then go to long int. If the value isn't representable by long int, the behavior is undefined. At the very least "in this context", a U (or u) suffix should be used to force the range to be unsigned int and then unsigned long int. By the way, if the literal doesn't fit in the allowed range, with or without a suffix, the program is broken.

>I don't know what the difference is between "void main()" and "int main()".
void main() is wrong, int main() is correct. That's the only difference that's relevant.
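
A minimal sketch of the two standard forms of main (anything else is non-standard):

int main(void)                    /* no command-line arguments */
{
    return 0;
}

/* or, when you need command-line arguments:

int main(int argc, char *argv[])
{
    return 0;
}
*/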

"The type of an unsuffixed integer constant is either int, long or unsigned long. The system chooses the first of these types that can represent the value."
--Taken from page no 118, chapter 3, "A book on C"(4th Edition) by AL KELLEY/IRA POHL. If this is right then from what i understand, if the value isn't representable by long int, the behavior is NOT undefined. If u think i had trouble catching what the author meant please do explain. If u think what the author said is not correct then please talk to him.

"The type of an unsuffixed integer constant is either int, long or unsigned long. The system chooses the first of these types that can represent the value."
--Taken from page no 118, chapter 3, "A book on C"(4th Edition) by AL KELLEY/IRA POHL.

Your book is wrong.

The type of an integer literal depends on its form, value, and suffix. If it is decimal and has no suffix, it has the first of these types in which its value can be represented: int, long int; if the value cannot be represented as a long int, the behavior is undefined. <snip octal and hexadecimal literals> If it is suffixed by u or U, its type is the first of these types in which its value can be represented: unsigned int, unsigned long int. If it is suffixed by ul, lu, uL, Lu, Ul, lU, UL, or LU, its type is unsigned long int.

A program is ill-formed if one of its translation units contains an integer literal that cannot be represented by any of the allowed types.

--Taken from Section 2.13.1 Integer Literals, paragraph 2, of the C++ standard.

Well, I don't know much about the C++ standards, but this is what is written on page 193 of The C Programming Language (second edition) by K&R:

The type of an integer constant depends on its form, value and suffix. If it is unsuffixed and decimal, it has the first of these types in which its value can be represented: int, long int, unsigned long int. If it is unsuffixed, octal or hexadecimal, it has the first possible of these types: int,unsigned int, long int, unsigned long int. If it is suffixed by u or U, then unsigned int, unsigned long int. If it is suffixed by l or L, then long int, unsigned long int. If an integer constant is suffixed by UL, it is unsigned long.

So I think in C the types of integer constants are very much defined. You might want to go here http://www.lysator.liu.se/c/schildt.html
and see section 6.2.1.4. It says:
"Actually, unlike integers, such conversions are undefined...". It does give a hint that integer conversions are defined. I'm really confused.

Could it be that this is one of those places where C differs from C++? Waiting to be enlightened.

working in hexadecimal is much easier for large numbers! using it will usually mean that you WONT go over the boundaries for the compiler (as a number such as 28746262 you could be +- 1 out, whereas 0xFFFFFFF.... you know its right!) the integer value in the loop also seems a bit too high (260920 doesnt mean anything to me...) and a bit random. what is the program supposed to do? is it just plainly to have 260920 random longs, and if so what is that going to be used for!????!?

The purpose of this program is to make a huge random array and sort it (several times, so it takes a noticeable amount of time) with different sort algorithms (such as quicksort, bubble sort, merge sort, etc.), compare the time each sort takes, and finally compare those times with the complexity of each algorithm.
How do I do that?!
I think a larger range means I have fewer repeated elements in the array.

>Well, I don't know much about the C++ standards
That's fine, but when I quote from either the C or C++ standard, it means I'm right. Just for future reference, because the standard is the ultimate authority on the matter, and no amount of linking to Schildt (not a good idea either way) or quoting K&R (which is never a bad idea) will change that.

>So I think in C the types of integer constants are very much defined.
C leaves open the possibility of extended integer types, but makes sure that in no way will an unsuffixed decimal constant resort to an unsigned type. The C standard gives a nice table where the allowed types for an unsuffixed decimal constant are int, long int, and long long int. The detail is thus:

If an integer constant cannot be represented by any types in its list, it may have an extended integer type, if the extended integer type can represent its value. If all of the types in the list for the constant are signed, the extended integer type shall be signed. If all of the types in the list for the constant are unsigned, the extended integer type shall be unsigned. If the list contains both signed and unsigned types, the extended integer type shall be signed or unsigned.

The rule that matters here is the one requiring the extended type to be signed when every type in the list is signed: it rules out a conversion to unsigned long, because the list for the relevant constant doesn't include unsigned types.

>The purpose of this program is to make a huge random array
If you want a really big array, allocate the memory dynamically and save your "stack". Of course, it makes more sense to divide the test up so that the weaker algorithms (bubble sort, insertion sort, selection sort) use a smaller array, then scale the results accordingly. Otherwise you'll be waiting a long time for no good reason.
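
A minimal sketch of the dynamic-allocation idea (the size and the way the random values are combined are only illustrative; rand() by itself may return nothing larger than 32767):

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N 260920 /* array size from the original post */

int main(void)
{
    unsigned long *array = malloc(N * sizeof *array); /* heap, not stack */
    size_t i;

    if (array == NULL) {
        fprintf(stderr, "out of memory\n");
        return EXIT_FAILURE;
    }

    srand((unsigned)time(0));
    for (i = 0; i < N; i++) {
        /* combine two rand() calls so the values span more than RAND_MAX */
        array[i] = ((unsigned long)rand() << 16) ^ (unsigned long)rand();
    }

    /* ... sort and time the array here ... */

    free(array);
    return 0;
}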

Fair enough.

I finally got the enlightenment I was waiting for. Almost every resource I found on the web confirmed that I was correct (i.e. if it cannot be represented in a long then it will go to unsigned long). There was no relevant information about unsuffixed integer constants in the ISO/IEC 9899:1999 standard, at least I could not find any. However, according to Sun's C User's Guide,
http://docs.sun.com/source/817-6697/tguide.html

With the -xc99=all (supports ISO/IEC 9899-1999), the compiler uses the first item of the following list in which the value can be represented, as required by the size of the constant:

1. int
2. long int
3. long long int

The compiler issues a warning if the value exceeds the largest value a long long int can represent.

With the -xc99=none (supports only ISO/IEC 9899:1990), the compiler uses the first item of the following list in which the value can be represented, as required by the size of the constant, when assigning types to unsuffixed constants:

1. int
2. long int
3. unsigned long int
4. long long int
5. unsigned long long int

So I guess my argument was based on ISO/IEC 9899:1990, whereas your argument agrees with ISO/IEC 9899:1999. I guess you were right after all. Thank you.
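
A tiny sketch of that difference, assuming 32-bit int and long (how the old rules treat it is up to the compiler):

#include <stdio.h>

int main(void)
{
    /* 4294967296 is 2^32, so it doesn't fit in a 32-bit long.
       Under C99 rules the unsuffixed constant becomes long long;
       under the old C90 rules the compiler would try unsigned long,
       which on this assumption is also too small. */
    long long big = 4294967296;

    printf("%lld\n", big);
    return 0;
}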

>Could it be that it is one of those places where C differs from C++? Waiting to be enlightened.

I think you may be onto something. Here is a little program I've used to let the computer tell me how big an integer, long or unsigned long can be. (This is the int version. A variant can be used to do a similar thing for float and double also.)

#include <stdio.h>

main()
{
    int x = 2, y;
    y = x + 1;
    while (y > x)
    {
        x = 2*x + 1;
        y = x + 1;
    }
    printf("x = %d  y = %d\n", x, y);
}

Now the interesting part: compiled with the Turbo C compiler this gives x = 32767, y = -32768, which is exactly as expected. Using bcc32, whether you compile it as a C program or as a C++ program, you get x = 2147483647. So this compiler upgrades x and y from int to long when you'd get overflow otherwise. It could have upgraded further to unsigned long, but for reasons unknown to me it didn't. So apparently it's not exactly a difference between C and C++ but between C and C++ compilers. It would be informative to run this code on other C++ compilers.

Relying on undefined behavior is hardly good practice. These values are already available to you at compile time in the standard headers limits.h and float.h.

>Relying on undefined behavior is hardly good practice. These values are already available to you at compile time in the standard headers limits.h and float.h.

Whether this is good practice or not is not the question I asked. The mystery is why the two compilers (from one company, no less) give different results. What is this telling us?

>Whether this is good practice or not is not the question I asked. The mystery is why the two compilers (from one company, no less) give different results. What is this telling us?

Absolutely nothing -- since it is undefined behavior.

>What is this telling us?
It's telling us that you're struggling with simple concepts like "anything could happen".

As I said in my last post, there's a difference between the C89 and C99 standards. Compilers built to the C89 standard will promote integer constants from int to long and then to unsigned long, whereas compilers that conform to C99 will go from int to long and then to long long, but not to unsigned long. Compilers will act differently based on the standard they were designed for. And this behaviour is not necessarily "undefined"; it is rather "implementation-defined". There's a subtle difference between these two terms.

>there's a difference between C89 and C99 standards
Yes, there is. But you're missing the point.

>There's a subtle difference between these two terms.
There's also a subtle difference between the rules for integer literal conversions (that we've focused on for most of this thread) and signed integer overflow (displayed in murschech's horrid code). Signed integer overflow is always undefined, pick whatever C or C++ standard you want.
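
For what it's worth, here's a small sketch of that distinction: unsigned wraparound is defined, signed overflow is not.

#include <stdio.h>
#include <limits.h>

int main(void)
{
    unsigned int u = UINT_MAX;
    u = u + 1;                   /* defined: unsigned arithmetic wraps around to 0 */
    printf("UINT_MAX + 1 = %u\n", u);

    /* int s = INT_MAX; s = s + 1;   <- undefined behavior: anything may happen */

    printf("INT_MAX = %d (straight from limits.h, no overflow needed)\n", INT_MAX);
    return 0;
}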

Narue thinks my code is "horrid". Gosh, I thought it was rather neat. I still do.
But maybe this bit of code will be more acceptable.

#include <stdio.h>

main()
{  printf("%d\n",sizeof(int));
}

If you're not in the mood to run this code, I'll tell you what bcc32 gives as output: 4. No wonder this compiler finds that the biggest integer is 2 billion plus. Turbo C gives the result 2.

>Gosh, I thought it was rather neat.
Neat is not the same as good.

>But maybe this bit of code will be more acceptable.
Not really:

>main()
main returns an int. For C89, it's okay (but poor style) to omit the return type; for C99 it's a syntax error. The correct definition of main is:

int main(void)

>printf("%d\n",sizeof(int));
sizeof evaluates to a size_t value, which is not representable as a signed integer (which %d expects). Under C89 you can get around this by casting the result of sizeof to unsigned long, and using %lu:

printf("%lu\n", (unsigned long int)sizeof(int));

In C99, the z modifier lets you print a size_t:

printf("%zd\n", sizeof(int));

>}
If you were using C99 then this would be legal because main returns 0 by default. However, because you're not using C99 (see above) it's undefined behavior because you neglected to return a value.

>If you're not in the mood to run this code, I'll tell you what bcc32 gives as output: 4. No wonder this compiler finds that the biggest integer is 2 billion plus. Turbo C gives the result 2.

Turbo C runs in DOS, which is a 16-bit OS, hence it shows 2 bytes. On the other hand, bcc32 will show 4 bytes because it runs in Windows (or rather, it builds a Win32 console application). Typically short has 2 bytes and long has 4 bytes. The size of int will be either 2 or 4 depending on the system. That's what I know.

>main returns an int. For C89, it's okay (but poor style) to omit the return type; for C99 it's a syntax error.
>If you were using C99 then this would be legal because main returns 0 by default. However, because you're not using C99 (see above) it's undefined behavior because you neglected to return a value.

C99, C89, or C whatever are suggestions for compiler writers and they don't tell you what any compiler does. For instance, C99, you tell me, requires a return value for main and failure to return a value is a syntax error. But my compiler (bcc32) doesn't tag it as a syntax error, so why should I care that C99 does? This certainly doesn't make the computation incorrect. It's the compiler that I have to satisfy, not C99.

But let's get to the main point, namely, why my code, which I'll name "horrid" in your honor, works, even when there's undefined behavior. I'll enter the code again.

// HORRID
#include <stdio.h>

main()
{
    int x = 1, y = 2;
    while (y > x)
    {
        x = 2*x + 1;
        y = x + 1;
    }
    printf("Max integer:  %d\n", x);
}

Now let's analyze it. Suppose that int is 1 byte. (This just makes the exposition easier; it clearly extends to any number of bytes.)
Initially x = 1, which I'll write in binary as 00000001. Then
2*x + 1 = 00000010 + 00000001 = 00000011 (binary). Then y will be well defined and > x, so the loop will continue. The next value of x will be 00000111 (binary) and again y will be > x. Finally we get to x = 01111111, which is the LARGEST VALUE A SIGNED INTEGER CAN HAVE. Now when y is set equal to x+1 it will have a value of who knows what because of undefined behavior, but it will still be an integer because y is declared as an integer variable. Since x is at this point the largest possible integer, the condition y > x will fail REGARDLESS OF WHAT VALUE IS ASSIGNED TO y, so the printf statement will execute. I hope this settles the matter.

You mentioned that the ranges of the int, long, etc, can be found in the header files. Do you know which header file?

>C99, C89, or C whatever are suggestions for compiler writers and they don't tell you what any compiler does.
They tell you what any compiler that claims to implement C must do.

>But my compiler (bcc32) doesn't tag it as a syntax error, so why should I care that C99 does?
Well, since that compiler doesn't implement C99, you should care because it's undefined behavior.

>This certainly doesn't make the computation incorrect. It's the compiler that I have to satisfy, not C99.
So you plan on using the same compiler for the rest of your life? What a narrow perspective you have. I pity you.

>works, even when there's undefined behavior
It could work, or it could wipe your hard drive. I personally couldn't care less what happens to your stupid ass, but I do care about you touting that code as "good" on this forum.

>Do you know which header file?
limits.h

><snip awful code and incorrect analysis> I hope this settles the matter.
Yes, it proves conclusively that you're an arrogant, ignorant retard who has no hope of becoming anything more than a mediocre script kiddie.

Reality check: Not everyone uses your compiler, your operating system, and your exact hardware configuration. When you realize this, you may actually have a chance of growing a brain.

So where is the error in my analysis?
