I need to know which of these three pieces of code is considered best. I also need to know which one is more efficient memory-wise and which is more efficient time-wise. Thanks:
enums

enum MyEnum { EnumTypeOne = 1, EnumTypeTwo = 2, EnumTypeThree = 4, EnumTypeFour = 8 };

macros

#define EnumTypeOne 1
#define EnumTypeTwo 2
#define EnumTypeThree 4
#define EnumTypeFour 8
typedef char MyEnum;

constants

const char EnumTypeOne=1;
const char EnumTypeTwo=2;
const char EnumTypeThree=4;
const char EnumTypeFour=8;
typedef char MyEnum;


The #define group would have the least memory impact, but I don't understand why you're using the typedef in the same area.
Speed is not (really) an issue, since they are evaluated at compile time.

#1 and #2 are, at run time, essentially the same, except that enums (if I remember correctly) are integers instead of characters. Also, they will be doing stores and comparisons with literal values (e.g. x=1, y>2, z<3) instead of referenced numbers.

#3 will use more memory at run-time since it allocates the const chars then references them (i.e. comparing by reference).
At the CPU level, this makes a world of difference. Since the CPU has to access the memory to get the number you're referencing, it will take more time to process it as opposed to you simply handing it the number.

Thanks, I tend to like using macros because that way they can fit into whatever type I need. I do not know the limit of enum values, but I have ended up with key values as high as 0x100000000000 and higher. Thanks.

#1 and #2 are, at run time...

???

I need to know which of these three copies of code is considered best.

const variables are preferred over macros because their names aren't lost during preprocessing. One of the biggest problems with macros is the difficulty of debugging code where a macro is involved. Making const variables compile-time constants was intended to replace non-function-like macros.

When defining a set of related constants, enumerations are preferred because it makes your intentions clear and offers language support for treating the constants as a proper collection. If you use const variables or macros, the onus is completely on the programmer to make sure that the constants are used as intended.
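
To illustrate what "language support for a proper collection" buys you, here's a minimal sketch (the name_of function is invented for this example, not from the thread): the compiler knows the complete set of enumerators, so many compilers will warn if a switch over a MyEnum leaves one unhandled.

#include <cstdio>

enum MyEnum { EnumTypeOne = 1, EnumTypeTwo = 2, EnumTypeThree = 4, EnumTypeFour = 8 };

const char* name_of(MyEnum e)
{
    switch (e) {                    // many compilers warn here if an enumerator is left unhandled
        case EnumTypeOne:   return "one";
        case EnumTypeTwo:   return "two";
        case EnumTypeThree: return "three";
        case EnumTypeFour:  return "four";
    }
    return "unknown";
}

int main() { std::printf("%s\n", name_of(EnumTypeTwo)); }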

I also need to know which one is more efficient memory-wise and which is more efficient time-wise.

There's not going to be a significant enough difference for you to worry about it. Don't sweat the small stuff; data structure and algorithm choices make up the lion's share of performance and storage costs.

Now deceptikon and DeanMSands3 seem to disagree. DeanMSands3 said:

#3 will use more memory at run-time since it allocates the const chars then references them (i.e. comparing by reference).
At the CPU level, this makes a world of difference. Since the CPU has to access the memory to get the number you're referencing, it will take more time to process it as opposed to you simply handing it the number.

and deceptikon said:

const variables are preferred over macros because their names aren't lost during preprocessing. One of the biggest problems with macros is the difficulty of debugging code where a macro is involved. Making const variables compile-time constants was intended to replace non-function-like macros.

...

There's not going to be a significant enough difference for you to worry about it. Don't sweat the small stuff; data structure and algorithm choices make up the lion's share of performance and storage costs.

The disagreement is minimal. Having done low-level code, I often think at a low level. But the user will likely never notice. Ever.
As deceptikon said:

There's not going to be a significant enough difference for you to worry about it.

One thing I neglected, and that he's right about, is that, yes, const chars are easier to debug than macros.

Another difference -- the value of a const char cannot be changed, but the definition of a #define can be changed. That makes const char a lot safer to use than #defines.
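
As a small illustration (the names here are made up), the preprocessor will happily accept an #undef followed by a new #define, whereas assigning to the const is a compile error:

#define MaxUsers 8
#undef  MaxUsers            // perfectly legal: the old definition is simply gone
#define MaxUsers 16         // later code now silently sees 16

const char MaxSlots = 8;

int main()
{
    // MaxSlots = 16;       // error: assignment of read-only variable 'MaxSlots'
    return MaxUsers + MaxSlots;
}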

I don't think we really disagree, we're just thinking on different levels. A big difference between two operations from the perspective of the CPU will nearly always be negligible from a user perspective unless it's performed a large number of times. The time it takes to load individual variables from memory into a register will be overwhelmed by the myriad steps performed in an algorithm. That's why it's best practice to focus on optimizing your algorithms' logic first before trying to micromanage things that will have far less impact.

Skip defines and macros. Use enums when you need type safety and when they make more sense.

I would recommend enum in this case because that's exactly what they're there for.

But wouldn't an enum be limited to the size of an int? I need to be able to store a massive set of independent values; 32 bits will not be enough.

I don’t know about the memory- and time-efficiency aspects, but using const is supposed to be best practice.

I suggest getting in the habit of favouring the compiler instead of the preprocessor, or getting the compiler to do as much of your work as possible.

As has been indicated already:

- if you use a const variable, and then somehow change the value of that variable later in the program, the compiler will complain (in other words, the compiler will catch your mistake and help you out)

- if you use const, and there is a problem with the variable during compilation, the compiler will let you know there is a problem with it by name. On the other hand, if you use a macro, the symbolic name may never be seen by the compiler or entered in the symbol table. If that value causes an error during compilation, the error message might refer to 1, 2, 3, 4, etc. -- or whatever number is assigned to the symbolic name by the define statement. If you receive a message like “Error with 1”, tracking down that error is a lot harder than with an error message like “Error with EnumTypeOne” (see the sketch after this post).

Get the compiler working for you, not against you.
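
As a rough illustration of that point (the names are invented), a macro's name is gone before the compiler or debugger ever sees it, while a const keeps its name in the symbol table when you build with debug info:

#define MaxRetries 4              // replaced by the preprocessor; the name never reaches the compiler
const int kMaxRetries = 4;        // the name survives (and shows up in a debugger when built with -g)

int main()
{
    int a = MaxRetries;           // in gdb, "print MaxRetries" typically fails: no such symbol
    int b = kMaxRetries;          // "print kMaxRetries" works, and error messages can use the name
    return a + b;
}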

Here's my take on this. I mostly agree with most posters so far. Just adding my grain of salt.

Myth #1: "const variables are less efficient than #defines"
First, not all const-variables are the same. In this case, we are talking only about const-variables which could be (easily) replaced by a #define, so that limits things to POD-types (in the formal sense), meaning you won't have pesky little hidden mutables. Also, we have to make a distinction between internal linkage and external linkage. A const-variable with internal linkage (which is the default for a namespace-scope const in C++, btw) will, by definition, always carry the value that it is given at the declaration site. That's not true for extern const-variables, because their values (or references to them) are resolved by the linker. So, in the case of an internal-linkage const-variable, there is no reason why the compiler cannot replace the occurrences of the const-variable with its literal value, because that is well within the purview of the "as if" rule for code optimizations by the compiler, and, in fact, unless there are debugger-symbol requirements involved, most decent compilers will perform this optimization.
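
To make that concrete, here is a minimal sketch (the has_three function is hypothetical) of an internal-linkage const that a compiler is free to fold away:

static const int EnumTypeThree = 4;

int has_three(int flags)
{
    // under the "as if" rule the compiler may substitute the literal 4 here,
    // so this typically generates the same code as "return flags & 4;"
    return flags & EnumTypeThree;
}

int main()
{
    return has_three(6);    // 6 & 4 == 4, returned as the exit code
}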

Second point is, as some have said already, the actual performance difference (if any) will be negligible compared to the macroscopic issues like the logic of the algorithms, locality of reference, etc.


So, now that we know that efficiency-wise there won't be a noticeable difference between your 3 options, or at least, not enough to care about it, we are left with issues of good coding practices. Here is a recap of some obvious reasons why #define values are EVIL:

1) They don't respect name scoping rules of any kind. Name scoping rules are what determines in which context a name is visible. For #defines, that's all over everything, a massive pollutant for the code. Global variables or types are already a huge improvement on this issue. If you write a function, and type the name of a variable or type there, then the compiler looks within the scopes of the functions (curly-braces), then in the class-scope (if it is a member function), then in the containing namespace(s), and then the global scope. And it will choose the first match that it finds. This is really nice because if you happen to make a bad choice even for a global variable, it is most likely that if there is another variable elsewhere in the program with the same name it will be picked before your global variable is. The problem is that a #define overrides all of this, because it is something the pre-processor does (basic find-replace operation). So, if you are not careful with your choice of name for a #define, you could corrupt anything. Now, you can fix that by giving the #define some very special unique name like #define MY_LIBRARY_DEFS_VALUE_OF_PI 3.14159 , but that doesn't make the use of #defines all that appealing anymore, does it?
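
A tiny sketch of that find-and-replace problem (names invented):

#define EnumTypeOne 1                  // from here on, the token EnumTypeOne is replaced everywhere

int main()
{
    // int EnumTypeOne = 42;           // would expand to "int 1 = 42;" and fail to compile,
    //                                 // with an error nowhere near the offending #define
    return EnumTypeOne;                // no scope, namespace, or class can shield you from the macro
}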

2) Obviously, if debugging is something you need to do or happen to do a lot, #defines could be a huge pain in the .. neck. You might end up having to memorize the actual values of all your #defined constants just to be able to recognize them during debugging.

3) Type-safety is an invaluable tool in C++, and so is const-correctness as one aspect of it. When using #defines, you are cutting yourself off from this really useful and reliable tool for writing safe and correct code. Once in a while, type-safety is a bit of an annoyance (you wish you could easily bypass it, which you can, of course), but that doesn't even come close to comparing with all the benefits you get from it. Learn to work with the type system: use types to enforce that the right parameters are given to the right functions, use them to do overloading (compile-time polymorphism, or double-dispatch mechanisms), use them to enforce mutability versus non-mutability (i.e. const-correctness), etc. Believe me, this is hugely beneficial for producing flexible yet robust software. Simple example: if you change the signature of a function by adding a parameter, and the parameters are all integers but each of them has a completely different meaning (like different kinds of flags), and some of the last parameters have default values, then you might be able to successfully recompile code that uses that function but gives it the wrong (old) parameters. If you use proper types (like enums in this case) to mark that semantic difference between the parameters, then the compiler will automatically catch those errors when you try to compile. That is vastly better than dealing with a program that behaves weirdly at run-time and digging to find where this erroneous function call is.
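
Here's a minimal sketch of that last example (the function and enum names are mine): give each kind of flag its own enum type and the compiler rejects swapped or raw-integer arguments.

enum OpenMode  { ModeRead, ModeWrite };
enum ShareMode { ShareNone, ShareAll };

void open_file(OpenMode mode, ShareMode share) { /* ... */ }

int main()
{
    open_file(ModeRead, ShareAll);         // OK
    // open_file(ShareAll, ModeRead);      // error: no conversion from ShareMode to OpenMode
    // open_file(0, 1);                    // error: plain ints no longer slip through
    return 0;
}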


The above is also the main reason why, in most "professional" libraries, you will find that #defines are used, generally, for three purposes:
1) Include guards: #ifndef ... #define ... #endif.
2) Compilation options (or platform-specific conditional compilations).
3) Complicated macros (often a last resort to do things that are not otherwise possible with actual C++ code).
I can't recall seeing or using #defines in any other contexts in any good libraries or recent code of mine.
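
For illustration, a header skeleton (all names hypothetical) showing the first two of those legitimate uses:

#ifndef MY_LIBRARY_CONFIG_H        // 1) include guard
#define MY_LIBRARY_CONFIG_H

#if defined(_WIN32)                // 2) platform-specific conditional compilation
    #define MY_LIBRARY_EXPORT __declspec(dllexport)
#else
    #define MY_LIBRARY_EXPORT
#endif

#endif // MY_LIBRARY_CONFIG_H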


For your particular case, definitely go with an enum type. That is exactly what they are for: enumerations of options or flags. You can always convert enums into integers if you need to do any bit-masking or whatever with them.

Also, if you are worried about storage size or whatnot, note that C++11 has added the possibility to write typed enums, that is, enums for which you can specify how the actual values are represented in memory (a char, an int, a long long -- any integral type).
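
For example, a sketch of a C++11 enum with a fixed 64-bit underlying type, which covers values like the 0x100000000000 mentioned earlier:

#include <cstdint>

enum MyEnum : std::uint64_t {
    EnumTypeOne   = 1ULL << 0,
    EnumTypeTwo   = 1ULL << 1,
    EnumTypeThree = 1ULL << 2,
    EnumTypeHuge  = 1ULL << 44     // 0x100000000000, well past what a 32-bit int can hold
};

static_assert(EnumTypeHuge == 0x100000000000ULL, "fits because the underlying type is 64-bit");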

If you really have some massive set of independent values, and you actually want one bit for each, it might not be very efficient to represent them as integer variables (within an enum) where all bits except one are zero. You might as well just store the position of the non-zero bit, and use that later. If you use typed enums, you can even create a special class that deals with that.
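
One possible sketch of that idea (the names and the mask_of helper are mine): store the bit position in a small typed enum and build the mask only when you need it.

#include <cstdint>

enum FlagBit : std::uint8_t { BitOne = 0, BitTwo = 1, BitThree = 2, BitFortyFour = 44 };

inline std::uint64_t mask_of(FlagBit b)
{
    return std::uint64_t(1) << b;      // one 64-bit mask built on demand from the stored position
}

int main()
{
    return mask_of(BitTwo) == 4 ? 0 : 1;
}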

But truly, if you really need an enum type to contain some crazy large amount of elements, that's probably an indication that it shouldn't be one enum type, but rather several different enum types.

Const-variables aren't a totally bad option either. But, in general, if you are representing an enumeration, use an enum type. Otherwise, a const-variable is fine, and please get used to working within a namespace such that you don't create global const-variables (this is also true for enum types).
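
For instance, a minimal sketch (the namespace name is made up) of keeping such constants out of the global scope:

namespace mylib {
    enum MyEnum { EnumTypeOne = 1, EnumTypeTwo = 2, EnumTypeThree = 4, EnumTypeFour = 8 };
    const char DefaultType = EnumTypeOne;   // referred to elsewhere as mylib::DefaultType
}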

I would say:
- Don't use the macro version
- Use the const int version if you plan to use the variables in arithmetic (your intentions are clear to the reader this way), i.e. if the numerical value of the thing is actually important to you.
- Use the enum version if having a human-readable label in the code is important to you, which I find is most of the time.

I don't believe there is any run-time difference between the three methods, since the compiler knows all of their values at compile time. They may well compile to the same thing.

Thank you, that was very helpful! I am going to go for enums (when possible) and otherwise create a simple container class if there are many things to represent.

But wouldn't an enum be limited to the size of an int? I need to be able to store a massive set of independent values; 32 bits will not be enough.

If it will fit in a 64-bit integer then you could use either const long long x = ??? or (with Microsoft compilers) const __int64 x = ???

Doesn't the new C++ standard allow you to define the size of your enum type?

Check the following article; it's a nice summary.

Yes, but it's not completely accurate. For example:

  1. switch can work with enum or #define but not with const.

That's not true. This compiles perfectly:

int main()
{
    const int x = 1;
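    // in C++, a const int initialized with a literal is a constant expression,
    // so it is allowed as a case label below (this is not the case in C)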
    int m = 12;
    switch (m)
    {
    case x:
        break;
    default:
        break;
    }
}