Is it incorrect to use the precompiler like a complicated copy-and-paste tool? I've been working on a system where a few objects are processed according to exactly the same 'pattern', so to speak, but they are not related objects. Although it would be possible to bring some of them back to a common base class, I've read a lot recently about virtual functions having a negative effect on a program's execution speed. The nature of this application is that it needs to be extremely fast. So shaving nanoseconds is important to a degree.

So, instead of writing out the same pattern of code many times, I made sure all of the objects had the same function names/usage, wrote the 'similar' processing functions once in an external file, and used #include and some #defines to make the functions even more reusable under different circumstances. I feel like it's a good use of the precompiler: it's making my program faster, I'm still type-safe (the compiler will spit errors if the precompiler generates insane code), and my code is still manageable (at least, more manageable than if I were actually copy-pasting).

This is only necessary in one part of my program, where lots of different data objects pass through chains of connected manipulator objects, each on different processing lines. All the processing lines do essentially the same thing with the object they receive, and put it back on the same line, until they hit certain objects, which need to know the line the object is coming in on and the type of object (which is fine, because different objects go on different processing lines). There are six processing lines in pairs, so three object types. In most cases, a single manipulator's six functions can be brought to a single pattern, and that pattern is the code being included externally. (That description is a replacement for actual code; it's too much to paste/attach.)

What do you guys think? Would you use the precompiler like that, or some other (general) method? Are virtuals really that bad? I'm planning to limit their usage elsewhere... but that depends on benchmark results =P


As far as virtual functions are concerned, they definitely incur some performance overhead, but in normal scenarios the cost and nightmares of maintaining repetitive code far outweigh the cost of using virtual functions.

The most important point to consider when using virtuals is that the binding is done at runtime, whereas when the code is repeated, the binding happens at compile time.

Another thing that mars the performance of virtual functions is that calls through a base pointer generally can't be inlined. So if you are repeatedly calling small chunks of code with the virtual tag attached, you suffer a great performance hit.

Then again, it boils down to what kind of design you have used in your project, but the cost of setting up vptrs and losing the ability to inline still has to be taken into consideration.
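
To make the inlining point concrete, a minimal sketch (hypothetical names, not code from your project):

struct Base {
    virtual int step(int x) { return x + 1; }   // dispatched through the vptr
};

struct Derived : public Base {
    virtual int step(int x) { return x + 2; }
};

int run(Base* b, int x)
{
    // The compiler usually can't inline this: which step() runs is only
    // known at runtime, so it emits an indirect call through the vtable
    // (unless it can prove the dynamic type and devirtualize).
    return b->step(x);
}

inline int step_direct(int x) { return x + 1; }

int run_direct(int x)
{
    // A plain call like this is a trivial inlining candidate.
    return step_direct(x);
}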

>>The nature of this application is that it needs to be extremely fast. So shaving nanoseconds is important to a degree.
If speed is that important then don't use C++, but use C instead. And code in assembly when needed to optimize some time-consuming algorithms.

I don't think virtuals make a program slower, just a little larger. The way I understand them, virtuals are little more than function pointers, and calling a function through a function pointer is only a couple of clock ticks slower than calling the same function directly.

>>Another thing that mars the performance of virtual functions is that calls through a base pointer generally can't be inlined
Although that is a true statement, large functions are never inlined anyway unless you're using a very stupid compiler.
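
For what it's worth, here is roughly how I picture the mechanism, modeled with a plain function pointer (a conceptual sketch only; real vtable layout is implementation-defined):

#include <cstdio>

struct VTable { int (*process)(int); };

int plus_one(int x) { return x + 1; }

int main()
{
    VTable vt = { &plus_one };
    const VTable* vptr = &vt;    // each polymorphic object carries a pointer like this
    int r = vptr->process(41);   // two loads plus an indirect call, vs. one direct call
    std::printf("%d\n", r);     // prints 42
}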

> If speed is that important then don't use C++, but use C instead. And code in assembly when needed to optimize some time-consuming algorithms.

That could be a possibility at some point; this app's moved from Perl to C++, though, so I'm seeing a hell of a speed advantage already ^_- I think writing this heavily object-oriented aspect in assembly would probably take the rest of my lifetime...

[erk, I've started using those > quotes now >_<]

Hm. I'm finding quite a few situations where the solution is to make a function that does a process, then pass the results to another function, then pass parts to another function, and so on... I'm finding it hard to resist the urge to write one function with include, include, include instead of function, function, function. =P

I could end up making a big unintelligible puzzle of includes. I think I'd better desist unless it's REALLY necessary..

Well. After a (very) quick think, I'm going to bring those three objects I mentioned to a base class after all. I think it will make my code a lot easier for anyone but me to understand and work with, and I may only need one or two virtual functions in the end..

I will still need some precompile includes, because of the way those mutating/processing objects have always worked: in Perl, you don't have to pay much attention to type safety (you can pass function names as strings!), and this has been an almost para-implementation of the Perl version.

s.o.s, Ancient Dragon; thank you for the advice.

> I think writing this heavily object-oriented aspect in assembly would probably take the rest of my lifetime...

:eek: No, not the whole blasted program!

> s.o.s, Ancient Dragon; thank you for the advice.

You are welcome. Oh and btw, best of luck with your project...

> No, not the whole blasted program!

Mwaha... nah, this is one aspect of the main program; it's the heavily OO aspect, and the part I would certainly least like to write in any non-OO language... There is a part that's just a linear (character-by-character) input parser... That might work well in C, perhaps. But not today =P

> Oh and btw, best of luck with your project...

Thanks again ~s.o.s~

>> So shaving nanoseconds is important to a degree. Are virtuals really that bad?
Virtual functions will make a difference at the nanosecond level. But they improve maintainability many times over.

>> UNTIL they hit certain objects.......type of object
You talked about a base class, types of objects, and certain types of objects, so I'm just a bit confused. The most common case in OOD is that you have a function which is supposed to process different types of objects (derived class types) but works with only one type (the base class type). If that is the situation, it can't be solved using the pp, because you only know the type at runtime.
So I only hope you do have a situation where you're solving a problem using the pp instead of virtual functions.

>>I'm planning to limit their usage elsewhere... but that depends on benchmark results
Given that using the pp instead of OOD to solve the problem will make things much less maintainable, this is the only thing you can do, i.e. use OOD elsewhere.
Lastly, if you can already foresee changes to this part of the code in the future, I still suggest you reconsider and use virtual functions.

As for your second post, I didn't get anything from it...

Sorry, that description was a bit ambiguously written.

What do you mean exactly by pp? (p)rocedural (p)rogramming? I find that term a bit vague, as all programming is laying out procedure, to a degree.

The type of the objects (class of the object) is always known at compile time.

Originally I had six functions working with three unrelated types of object; each of the three objects had deliberately identical function names and external usage, but each of the three objects did something different internally...

From now on (in what I'm writing here): I'm going to use 'internal' to relate to those three objects, and 'external' to relate to anything but those three objects =P

I was using include directives in each external object's functions to bring in blocks of code that performed the same set of operations (in terms of the source code) with a parameter that, in each case, was of a different type/class. I've stopped that, because on closer inspection those classes can be related quite cleanly, with only a small use of virtual functions.
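
(As an aside: a function template is another way to get that 'same source, different parameter type' effect with full type checking and no precompiler involved. A sketch, with a hypothetical weight() standing in for the real shared interface:)

#include <iostream>

// Hypothetical stand-ins for two of the descriptor types:
struct ElementDescriptor { int weight() const { return 1; } };
struct EmbedDescriptor   { int weight() const { return 2; } };

// One template body gets instantiated for each parameter type,
// so the identical operations are written exactly once.
template <typename Desc>
int process_common(const Desc& e)
{
    return e.weight() * 10;
}

int main()
{
    ElementDescriptor el;
    EmbedDescriptor   em;
    std::cout << process_common(el) << " " << process_common(em) << std::endl; // prints: 10 20
}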

I'm still using include directives where the current external function has to call identically named functions in another external object, depending only on the current function's name, and where that call comes after a lot of (identical) processing (often to find the next external object to pass the object under process into) that would otherwise have to be repeated. In Perl, that's easy to get around by collapsing everything into one function and passing 'half' a function pointer as a parameter... In C++, it clearly can't be done like that.
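
The closest C++ equivalent I can think of is a pointer-to-member-function; a minimal sketch (Stage and its member functions are hypothetical, not my real classes):

#include <iostream>

struct Stage {
    int preTag(int x)  { return x + 1; }
    int postTag(int x) { return x * 2; }
};

typedef int (Stage::*StageFn)(int);

// The shared processing lives here exactly once; the caller picks
// which member function to apply, a bit like passing a name in Perl.
int run_stage(Stage& s, StageFn fn, int input)
{
    // ... the lot of identical processing would go here ...
    return (s.*fn)(input);
}

int main()
{
    Stage s;
    std::cout << run_stage(s, &Stage::preTag, 10)  << std::endl;  // 11
    std::cout << run_stage(s, &Stage::postTag, 10) << std::endl;  // 20
}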

By 'until they hit certain types of objects' I don't really know what I meant; it's more that an object (of a known type) will meet a certain condition for an external object that is processing it, and in that case the external object might pass the object into a different function, or set its own state differently. Either way, there have to be three pairs of functions, because in certain externally processing objects the function for a certain type of object can be very different; I do mean type there =P.

What I meant, I suppose, is that I can't make just one or two functions in each externally processing object that work with a base (or even a single concrete) class without heavy use of enumerations/switches/etc. There are six processing functions, all very similar in some objects, which may pass their input object to the same-named function in another (possibly different type of) processing object, and so on.

Quite hard to explain, maybe. I may even go with making the six functions in each object into six objects, and interconnecting those objects instead. Conceptually, this system's getting crazier by the day =P

The other case where I could perhaps reduce the use of virtual functions is where I have potentially thousands of pointers to tiny (different) objects in an expression parser / variable storage/resolution system. The method I'm thinking of would be a knock to memory usage, though, so perhaps not.

It might be easier to explain with a code example; this is an include file that's definitely staying:

#if defined CALLMODES_CUSP

  int directive = XFuse::STOIC;

  if(callmodes_fixed)
  {
    for(std::size_t i = 0; i < fixed_callmodes.size(); i++)
    {
      directive |= fixed_callmodes[i]->CALLMODES_CUSP(f, e);
    }
    return directive;
  }
  else if(aliased_modes != NULL)
  {
    std::vector<std::string> vct_call_modes;
    if( try_callmodes_again(vct_call_modes) )
    {
      for(std::size_t i = 0; i < vct_call_modes.size(); i++)
      {
        std::string this_mode = vct_call_modes[i];
        if(aliased_modes->count(this_mode) > 0)
        {
          directive |= ((*aliased_modes)[this_mode])->CALLMODES_CUSP(f, e);
        }
        else
        {
          std::cerr << "No modal child called " << this_mode << std::endl;
        }
      }
    }
  }
  return directive;

#undef CALLMODES_CUSP

#else

#error Define a CALLMODES_CUSP before including this file!

#endif

CALLMODES_CUSP will always be the name of one of the six functions that an object including that file will have, and that all objects connected to that object will have. Parameter 'e' may still be of one of three types.

Here's how I'd use it:

int 
XFuse::xmlns::live::
Case::preTagInterface(XFuse::Fuser& f, XFuse::ElementDescriptor &e)
{
#define CALLMODES_CUSP preTagInterface
#include "inserts/cusped_callmodes.cpp"
}

int
XFuse::xmlns::live::
Case::postEmbedsInterface(XFuse::Fuser & f, XFuse::EmbedsDescriptor & e)
{
#define CALLMODES_CUSP postEmbedsInterface
#include "inserts/cusped_callmodes.cpp"
}

That was quite an early one, but I was starting to use that method for really trivial things where virtual functions and class inheritance are probably a better solution... EmbedsDescriptor, EmbedDescriptor and ElementDescriptor (those are the three 'objects') now all derive from a base Descriptor, so I can write blocks of code that work with just Descriptor. The preEmbeds, preEmbed, preTag, postTag, postEmbed and postEmbeds functions (those are the six 'functions') can either include a bit of code like that example, call something dependent on just Descriptor, call something related to their own type of descriptor, or do a mixture of all three.
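
In skeleton form, the hierarchy now looks something like this (the kind() member here is a made-up placeholder for the real shared interface):

struct Descriptor {
    virtual ~Descriptor() {}
    virtual int kind() const = 0;   // placeholder for whatever the real common interface is
};

struct ElementDescriptor : public Descriptor {
    virtual int kind() const { return 0; }
};

struct EmbedDescriptor : public Descriptor {
    virtual int kind() const { return 1; }
};

struct EmbedsDescriptor : public Descriptor {
    virtual int kind() const { return 2; }
};

// Anything that only needs the common interface can take a Descriptor& now:
int inspect(const Descriptor& d) { return d.kind(); }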

Ah... I need to get back to it. All this talk is making me development-hungry.

>> ...now all derive from a base Descriptor...
Well, in the end I would say you made a good choice.. :)

pp = pre-processor
(same as what you refer to as the pre-compiler).
I called it pp because in the C compiler I used, the executable that performed pre-processing/compilation was named 'pp'.

Me>> ...If that is the situation, it can't be solved using the pp, because you only know the type at runtime.
You>> The type of the objects (class of the object) is always known at compile time.
I was referring to something of this sort:
- Assume classes Circle and Rectangle derive from class Shape.
- Shape has a pure virtual method called draw(). This method is implemented by both derived classes as required.
- Now, if there is a drawing that's wholly made of circles and rectangles, it can be represented by a vector<Shape*> one_drawing_vec. (Needless to say, it contains objects of type Circle* and Rectangle*, cast to Shape*.)
- If there were a function that wished to draw a full picture out of such a vector, it would be something like this:

void draw_one_picture(std::vector<Shape*>& theShapesVec)
{
     for( std::size_t i = 0; i < theShapesVec.size(); i++ )
          theShapesVec[i]->draw();
}

This is the usual way, and in this case, inside draw_one_picture(), one does not know the 'real type' of the elements of theShapesVec; each could be a Circle* or a Rectangle*. Of course, we don't even need to know.
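
To make that snippet self-contained, the classes described above could be sketched like this (the draw() bodies are just placeholders):

#include <iostream>
#include <vector>

struct Shape {
    virtual ~Shape() {}
    virtual void draw() const = 0;   // pure virtual, as described above
};

struct Circle : public Shape {
    virtual void draw() const { std::cout << "circle" << std::endl; }
};

struct Rectangle : public Shape {
    virtual void draw() const { std::cout << "rectangle" << std::endl; }
};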
