I am finding that variadic templates are quite useful for type safety in functions with a variable number of arguments, but I am having much less success using them with structs, especially with typelists.

Is it possible to 1) count and 2) extract types in/from a typelist using variadic templates?

I tried something like this with GNU g++, but it didn't work.

template <typename T, typename... Ts>
struct count{
    enum{value = 1 + count<Ts...>::value};
};
template<> count<T>{
    enum{value=1;}
};

Any suggestions?
Is it possible?

To vijayan121:

Wow! This is exactly what I needed!
I am quite privileged to have you respond to my post and immensely grateful.

While you are still here, I have another problem on which I would like you to share your wisdom.

I am switching from Windows/VC2009 to Linux/GNU for a number of irresistible reasons.
I (possibly naively) think I could dramatically improve my programming environment if I could get my hands on a command-line tool that would do exactly this:

1) List all the files in a directory that contain the function definitions required by each function of a specific cpp file, outputting the files and functions.
2) List all the files in a directory that use each function definition of a specific cpp file, outputting the files and functions.

This would permit me to:
1) quickly detect circularity
2) optimise the re-use of files: tracking which functions are rarely used, which could be combined into a more general function, etc.
3) rename a function and automatically adjust all the dependent names
4) create a tree to help me view the entire structure and help me get a grasp on the entire program in a new light
5) get rid of all headers and instead quickly create concatenated files on an ad hoc basis for specific testing and specific platforms/options.
6) ...

Now, it is likely (?) that many programmers are doing exactly this in many places, and surely there must be a (non-bloated) tool that does exactly that efficiently and simply. The problem is that, being a totally green newbie in the GNU environment, I have absolutely no clue where to find it: is it already part of gcc? Is it a popular tool that must be downloaded separately?

You would save me considerable time if you could orient me.

(Of course it may well be that my supposedly smart approach is a common newbie mistake, in which case I would humbly study your wise words on the subject. However, I am aware of the .h/.cpp model and never bought into it: instead I only use cpp files, and I have a master file that includes them all in a way (alas, manually!) that avoids any circularity or duplication. By using pre-compiled headers (the headers referring to .cpp files) I have not experienced any problems with slow builds.)

However, I am aware of the .h/.cpp model and never bought into it: instead I only use cpp files, and I have a master file that includes them all in a way (alas, manually!) that avoids any circularity or duplication. By using pre-compiled headers (the headers referring to .cpp files) I have not experienced any problems with slow builds.

I'm not sure that I clearly understand what you are trying to say. But I'm quite alarmed by what I think you are driving at.

The separate compilation model is fundamental to any reasonably large C++ software development effort. And it is founded on the idea of header files specifying the interfaces. This is an old book, written even before the 1998 C++ standard - and its age shows in several places. But I would still consider it mandatory reading for anyone aspiring to be a serious C++ programmer:
http://www.amazon.com/Large-Scale-Software-Design-John-Lakos/dp/0201633620
See if you can get your hands on a copy; it would clarify a lot of things that you have asked about.

1) List all the files in a directory that contain the function definitions required by each function of a specific cpp file, outputting the files and functions.
2) List all the files in a directory that use each function definition of a specific cpp file, outputting the files and functions.

Perhaps you could start by reading:
http://www.ibm.com/developerworks/linux/tutorials/l-gnutex/index.html

4) create a tree to help me view the entire structure and help me get a grasp on the entire program in a new light

Another developerWorks article: http://www.ibm.com/developerworks/library/l-graphvis/

While you are at it, you may also want to have a look at: http://www.ibm.com/developerworks/library/l-gnuprof.html

In the normal case, I would have also suggested some g++ compiler options which can generate dependency information; but they are pretty much useless in the absence of header files.

To your original question:

Is it possible to 1) count and 2) extract types in/from a typelist using variadic templates?

The easiest thing is to put the list of template arguments into a tuple type and then extract the count and elements. As so:

#include <cstddef>  // for std::size_t
#include <tuple>    // for std::tuple, std::tuple_size, std::tuple_element

template <typename... Args>
struct info_on_types {
  static const std::size_t count = std::tuple_size< std::tuple< Args... > >::value;
  template <std::size_t I>
  using element = typename std::tuple_element< I, std::tuple< Args... > >::type;
};
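
A quick usage sketch (assuming C++11; the names are just the ones from the struct above):

#include <type_traits>  // for std::is_same

static_assert( info_on_types<int, char, double>::count == 3,
               "the pack holds three types" );
static_assert( std::is_same< info_on_types<int, char, double>::element<1>, char >::value,
               "the type at index 1 is char" );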

However, you can more easily get the number of arguments in the variadic template pack using the sizeof... operator. As so:

template <typename... Args>
struct info_on_types {
  static const std::size_t count = sizeof...(Args);
  //..
};

Very often, I must say, it is easier to deal with tuples than with variadic templates. Except in the case of functions with variable arguments or constructor-forwarding functions, it is easier to dump the variadic template pack into a tuple and then carry the tuple type around instead (under the hood). That's just a little tip, while I'm at it.
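
For instance, here is a minimal sketch of that tip (assuming C++11; make_record is a hypothetical name, not a function from any library):

#include <iostream>
#include <string>
#include <tuple>
#include <type_traits>
#include <utility>

// Dump a variadic argument pack into a single tuple and carry that around instead.
template <typename... Args>
std::tuple<typename std::decay<Args>::type...> make_record(Args&&... args) {
  return std::make_tuple(std::forward<Args>(args)...);
}

int main() {
  auto rec = make_record(42, 3.14, std::string("hello"));
  std::cout << std::get<2>(rec) << std::endl;   // prints "hello"
  return 0;
}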

I tried something like this with GNU g++, but it didn't work.

This is because you have syntax errors: the specialization needs to be written as a partial specialization (template <typename T> struct count<T>), and there is a stray semicolon inside the enum. Using a static const std::size_t for the recursive "value" computation (instead of the enum) also works, and reads a little more cleanly.
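
A minimal sketch of the corrected recursion (assuming C++11), using a static const std::size_t for the recursive "value" as suggested:

#include <cstddef>  // for std::size_t

template <typename T, typename... Ts>
struct count {
  static const std::size_t value = 1 + count<Ts...>::value;
};

template <typename T>
struct count<T> {   // partial specialization that ends the recursion
  static const std::size_t value = 1;
};

static_assert( count<int, char, double>::value == 3, "three types in the list" );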

Now, to your second set of questions (in the future, please start a new thread when you have a completely new set of questions, to keep things organized):

1) List all the files in a directory that contain the function definitions required by each function of a specific cpp file, outputting the files and functions.
2) List all the files in a directory that use each function definition of a specific cpp file, outputting the files and functions.

It is not super obvious what you are actually describing. However, it sounds an awful lot like you are describing a call-graph. A call-graph is a tree-like structure (more generally, a graph) which organizes your code (functions or files) in terms of which functions call which functions and are called by which functions, resulting in a large spider-web that makes up a kind of picture of the flow/dependencies of your code. Usually, when talking about the same thing for files instead of functions, we call it a dependency-graph.

Most document-generating tools can be set to output such call/dependency graphs. In C++, the most popular document-generation tool is Doxygen. Document generation for code is an invaluable tool that any intermediate programmer must learn to use. It is really a neat system, and doxygen is particularly awesome. However, I generally never generate call-graphs or dependency-graphs, because 1) they are too time/resource-consuming to produce for any non-trivial project and 2) they are generally so large and intertwined that they are barely readable (and are often riddled with a large number of calls to basic functions that don't really matter in the larger picture). So, this might not be as useful as you might think. But play around with doxygen and see what you find useful and what you don't.

However, I am aware of the .h/.cpp model and never bought into it: instead I only use cpp files, and I have a master file that includes them all in a way (alas, manually!) that avoids any circularity or duplication. By using pre-compiled headers (the headers referring to .cpp files) I have not experienced any problems with slow builds.

I can sympathize as this is what I used to do in my very first project when I was learning C++ (and was working on a pretty large and complex project, and I had plenty of experience in programming (Delphi and others), but not with the C++ compilation/linking model). I can tell you now that the earlier you learn to do this correctly the better it will be. C++ is really not meant to be done this way (include all in one master cpp file). Learn to build applications and static and shared libraries. Use headers and split the code intelligently between headers and cpp files. And so on. More on this to follow.

1) quickly detect circularity

That's really the job of the compiler, but most IDE code analyzers can do that too. On Linux, I recommend using KDevelop as an IDE for C++; it is really good, and its code analyzer is really clever and fast.

2) optimise the re-use of files: tracking which functions are rarely used, which could be combined into a more general function, etc.

This is in part why you don't include all files into one massive cpp file. Function dependency analysis is really not the job of the compiler to worry about; it is the job of the linker and the build system. If you compile everything as a single cpp file, you're not making much use of the linker at all, let alone the build system. Most compilers produce object files (.o or .obj files) for each cpp file. Usually, these object files contain an index listing all the functions that they contain and all the functions that they use / need (in Linux, you can list that out with the nm command). Then, the linker analyzes those dependencies and resolves them, including only what is needed in the final output (executable). Many build systems will also either pick up the dependencies found in the object files or analyze file dependencies from the source code (include statements), and use that to figure out what needs to be recompiled or relinked when something has changed; this way, they avoid recompiling everything all the time.

3) rename a function and automatically adjust all the dependent names

That's the job of an IDE; most IDEs can do that. KDevelop certainly does (and I often use that feature; I like it). Some IDEs even include much more complex "refactoring" of code; the most advanced in that area is still Visual Studio. Nevertheless, this kind of thing is really in the realm of IDE features.

4) create a tree to help me view the entire structure and help me get a grasp on the entire program in a new light

This is either the job of the IDE or of the document generation tool. Many IDEs generate inheritance trees or similar things that organize functions in terms of namespaces / files / base-class or class, and so on. These are usually called "code inspectors". Microsoft calls it Intellisense to make it look like they invented it. Most code inspectors also grab doxygen (or other) tags to create pop-up descriptions of functions or classes in those trees. Code inspectors are limited and usually serve mostly to jump quickly between parts of the code, not so much to "get a grasp on the entire program". For that purpose, the document generators (doxygen) are more useful.

5) get rid of all headers and instead quickly create concatenated files on an ad hoc basis for specific testing and specific platforms/options.

Well, you won't find a tool that "gets rid of all headers", except maybe this script: $ rm *.h ./*.h ./*/*.h ./*/*/*.h ./*/*/*/*.h. Headers are extremely useful and important, and an integral part of the source code tree as well as of the library interfaces.

However, the latter part of your sentence describes something that is extremely useful, which is the ability to gain some insight into the target platform and, as a consequence of that, set/unset compilation flags, include/exclude certain parts of the code, or even do some find-and-replace operations on some source files, etc. All this is the job of the build system. My favorite, and one of the most popular cross-platform build systems out there, with the most IDE support, is called CMake. You can do extremely fancy cross-platform build automation with CMake, really cool stuff. And it also has partner applications: CTest (for unit-testing) and CPack (for packaging / creating installers).
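
Just to give a flavour of it, here is a minimal, hypothetical CMakeLists.txt sketch (the file and target names are made up, and a real project would need more than this):

cmake_minimum_required(VERSION 2.8)
project(my_lib)

add_library(my_lib geometry.cpp io.cpp)            # compile sources into a static library
add_executable(my_app main.cpp)                    # the application itself
target_link_libraries(my_app my_lib)               # link the application against the library

enable_testing()                                   # turn on CTest support
add_executable(test_rectangle test_rectangle.cpp)
target_link_libraries(test_rectangle my_lib)
add_test(NAME test_rectangle COMMAND test_rectangle)   # run the test suite with 'ctest'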

You will benefit greatly from learning to understand 1) how to properly assemble applications from C++ code (i.e., the separate compilation model, compiling object files, static/shared libraries, and linking into an executable), 2) how to use tools for generating code-documentation (e.g., doxygen), for helping you code (e.g., an IDE, like KDevelop, or Codeblocks, etc.), for helping you compile your code and automate a number of things related to your code (e.g., a tool like cmake), and for helping you stick to a rigorous unit-testing policy (e.g., CTest).

P.S.: The tools I mentioned are in no way the only ones, but they are pretty decent and fairly popular; there are other options too.

In truth, I am quite a bit frustrated with the apparent (non-)availability of (or the impossibility of knowing about the existence of) extremely basic C++ tools. You would think that after 30 years of using C++ someone would have solved this long ago, but apparently not.

Let's review all this step by step.

To recall: I am not asking for the moon: I am only asking for something that 1) lists the functions required by a file/function and something that 2) lists all functions that use my functions.

I went to the IBM site suggested by vijayan121. It doesn't have either of the two things I am looking for.
But it does describe tsort, a topological sort that I had indeed intended to use on the output of the tools I am looking for. One of the tools on the site, Graphviz, looked promising, but it requires that you have already done one of my tasks (the reconciliation of the functions called with their definitions) in order to run it.

KDevelop is probably usable on my Triskel/Ubuntu platforms, but since neither makes it easy to download it, as a Linux newbie I feel I am not competent enough to try to make it work (even though the site says it would) on my non-KDE (= GNOME) platform. From rapidly reading the documentation, I don't know if KDevelop can create what I want using command-line options, but as far as I have read there is no mention of it, so I am safely assuming the answer is no.

I read more of the gcc documentation and came across "protoize", which seems to be meant to perform exactly one of my needs (in this case it creates prototypes for all functions = lists all functions required by my program). Great! ... except it does not seem to work on C++ despite the mentioned option "-p g++", and the documentation does not mention that it would not work, does not mention any bug about this, does not mention pretty much anything: you would think that after years of development they could at least tell us more! But no. So I am still empty-handed.
--DEPENDENCIES_OUTPUT: it requires the headers to exist in the first place, so in my case it would solve my problem only after I had already solved it!

I did find a "makeheader" project on the web. It seems (?) to do (sort of) one of the 2 things I want. That's good news.

But still, I really can't understand why a supposedly active open-source community hasn't figured this out years ago and integrated it into bash. I mean, this appears to me to be so basic that I don't understand why people do so much (pretty much useless) bells-and-whistles stuff and don't (?) have any tools to do the bread-and-butter stuff.

And this brings me to the header/cpp point.

The reason I don't create headers for my cpp files is that I don't have a tool (perhaps makeheader will do ... I'll see, but I am quite sceptical by now after all the time spent on this search) to do it automatically. And again, I really can't believe that thousands of C++ programmers around the world do this manually every day while sitting in front of machines with the hardware to do it in a split second.

So I sometimes have the feeling that the open-source stuff might just be a sort of fake: that only pretty useless "toys" are offered (with all the important tools removed/kept secret); this leaves every developer to write his own as a sort of "initiation" to the field.

So I am more and more inclined to do this: I have the official C++ grammar from the standard, and I am very close to making the decision to just write a parser for it, write my own tools (using the parser output), and just keep them secret like everyone else seems to do!

Note to vijayan121: Regarding Lakos: I did read him (and also all the books you kindly listed for me a long time ago). I liked it ++. It was very instrumental in developing my view of how to deal with circularity. But even though I do like the interface/implementation model, this is different (in my humble newbie view) from being obliged to use a header/cpp model for the files (and masochistically rewriting headers ad nauseam all the time to keep the files in sync - can programmers still be doing this in 2012?)

To recall: I am not asking for the moon:

No, you are not asking for the moon, but you are being a bit too picky; you won't find a tool that does exactly what you ask for. But I guarantee that there are lots of tools that will satisfy all your needs once you get to know them. Remember that programmers often tend to prefer command-line tools, so it's not always going to be the plug-and-play, all-bells-and-whistles tools you seem to expect.

1) list the functions required by a file/function

I told you already, that's called a call-graph, see this example.

2) list all functions that are using my functions.

This is called a caller-graph; see the doxygen documentation for the CALL_GRAPH and CALLER_GRAPH options.
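
For example, a hypothetical excerpt from a Doxyfile (you can generate a default one with 'doxygen -g') that turns these graphs on; the graph generation itself requires Graphviz (dot) to be installed:

EXTRACT_ALL   = YES
HAVE_DOT      = YES
CALL_GRAPH    = YES
CALLER_GRAPH  = YES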

Doxygen seems to be the tool you are looking for. But there are also many other tools that can either generate call/caller graphs or use such graphs for other purposes. For example, profilers like gprof or callgrind report their profiling results in the form of call-graphs (with execution times required by each).

One of the tools on the site, Graphviz, looked promising, but it requires that you have already done one of my tasks (the reconciliation of the functions called with their definitions) in order to run it.

Doxygen uses graphviz, like many other document-generation tools. Learn to use it.

KDevelop is probably usable on my Triskel/Ubuntu platforms, but since neither makes it easy to download it, as a Linux newbie I feel I am not competent enough to try to make it work (even though the site says it would) on my non-KDE (= GNOME) platform. From rapidly reading the documentation, I don't know if KDevelop can create what I want using command-line options, but as far as I have read there is no mention of it, so I am safely assuming the answer is no.

KDevelop is distributed on all major package repositories. And, of course, it is available for Ubuntu. Just open a terminal and enter: $ sudo apt-get install kdevelop

In any case, there are other options that are nearly as good, including QtCreator, Eclipse, CodeBlocks, Geany, and NetBeans, which are all easily available on most major Linux distros.

I read more of the gcc documentation and came across "protoize", which seems to be meant to perform exactly one of my needs (in this case it creates prototypes for all functions = lists all functions required by my program). Great! ... except it does not seem to work on C++ despite the mentioned option "-p g++", and the documentation does not mention that it would not work, does not mention any bug about this, does not mention pretty much anything: you would think that after years of development they could at least tell us more! But no. So I am still empty-handed.

I am unfamiliar with protoize, but it seems like an arcane UNIX tool from the days of K&R vs. ANSI C programming (we're talking about the 80s here). As far as I can see, this tool is not useful for anything else. I think you misunderstood what the tool actually does. It generates ANSI C-compatible prototypes out of K&R C prototypes. This makes no sense in C++ (which is based off of ANSI C anyway).

--DEPENDENCIES_OUTPUT: it requires the headers to exist in the first place, so in my case it would solve my problem only after I had already solved it!

What? Are you saying that you have a bunch of code without any headers? All your code is just a bunch of cpp files that you include into one big cpp file for compilation!?!? I'm sorry to say, but if that's the case, don't blame the lack of tools for your troubles; you're the one doing things completely backwards. Don't blame the hammer manufacturer for making hammers that are not good for driving screws.

I did find a "makeheader" project on the web. It seems (?) to do (sort of) one of the 2 things I want. That's good news.
But still, I really can't understand why a supposedly active open-source community hasn't figured this out years ago and integrated it into bash. I mean, this appears to me to be so basic that I don't understand why people do so much (pretty much useless) bells-and-whistles stuff and don't (?) have any tools to do the bread-and-butter stuff.

Are you talking about a tool to generate a header file with the prototypes of all the functions and classes defined in a source file? Your problem stems from the fact that you skipped one hugely important and fundamental chapter in your learning of C++ (header vs. source files, and the separate compilation model), and now you are looking for a tool to repair the damage. The problem is, this is a very unusual problem, and few people would have programmed enough code for long enough in this wrong manner to actually need an automated tool to do the repairs, because if the code is less than about 50 thousand lines, you can easily do this manually, and it would probably be a good thing, i.e., as a punishment, so that you learn to do it correctly henceforth.

The reason I don't create headers for my cpp files is that I don't have a tool (perhaps makeheader will do ... I'll see, but I am quite sceptical by now after all the time spent on this search) to do it automatically.

Are you trying to argue that the reason you can't do the work is that there is no tool that does the work automatically for you? Writing a header is as much a part of the process of programming as writing the cpp file (and headers are arguably the more important of the two types of files). And the things you put in the cpp file versus what you put in the header file are different; the header doesn't just contain the prototypes of the functions in the cpp file. Also, the purpose of the header file is to create an interface for a piece of code, while the purpose of the cpp file is to implement that interface. These are two different purposes that both require an intelligent programmer to do the work. Suggesting that header files should be generated automatically from cpp files is preposterous (like suggesting all source code should be generated automatically, thus making programmers completely useless). If you understood what writing a header file is all about, and why you split things between headers and cpp files, you would never consider for one minute that this makes sense.

The only borderline-crazy case is if you have a bunch of cpp files and, somehow, the entire set of header files was destroyed in a very localized hard-drive fire that left all the cpp files intact; then you could consider using the source files to try to recreate the headers (which could easily be nearly impossible). But no one would ever write a program or library from scratch without writing header files as part of the process, and you never gain access to cpp files without also having access to the corresponding header files (but not vice versa).

And again, I really can't believe that thousands of C++ programmers around the world do this manually every day while sitting in front of machines with the hardware to do it in a split second.

Either your concept of what a header file is is grossly over-simplified, or you believe that artificial intelligence is already so developed that programmers are no longer needed since A.I. can replace them. Somehow, you seem to think a header file is just a list of function prototypes; it is far from that.

So I sometimes have the feeling that the open-source stuff might just be a sort of fake: that only pretty useless "toys" are offered (with all the important tools removed/kept secret); this leaves every developer to write his own as a sort of "initiation" to the field.

Stop whining. There is a huge number of really awesome tools that are open-source. There is also a lot of crap, of course, like that useless toy program makeheader! You are just a kid who is pissed about not finding a quick-and-easy solution to his mistake (not having understood headers vs. cpp files and the separate compilation model correctly). Just bite the bullet and go back and read up on the subject.

So I am more and more inclined to do this: I have the official C++ grammar from the standard, and I am very close to making the decision to just write a parser for it, write my own tools (using the parser output), and just keep them secret like everyone else seems to do!

C++ is notoriously difficult to parse correctly (it is often considered the hardest grammar to parse of all the programming languages in existence), so beware of that if you plan to write your own parser. Also, there are a few parsers available for C++, like the one from GNU (GCC's front-end for the C++ compiler), or a simpler one like the one doxygen uses. These are no secrets. You can download their sources easily. Of course, these are large code bases, given the difficulties and subtleties in parsing C++ code (whether it is for compilation or for simple call/dependency/inheritance graphs and documentation generation). There are also plenty of simple syntax highlighters or background parsers that you can find the source code for. None of this is a secret; I don't know where you got that idea from.

Sure, there seems to be no active open-source project for generating header files from a source file, but that's because the task is a ridiculous one. It's like complaining that there are no good car-suspension systems available that can cope with the bumpy ride you get when you put square wheels on your car; the problem is not with the manufacturers of suspension systems, it's with the guy who puts square wheels on his car!

KDevelop is distributed on all major package repositories. And, of course, it is available for Ubuntu. Just open a terminal and enter: $ sudo apt-get install kdevelop

I did that. It worked. I pushed my luck even more by trying to install cmake with:

sudo apt-get install cmake 

and it seemed to work! (crossing my fingers...)

I tried to run the included hello world: it built, but I couldn't run it because "execute launch" is grayed out and I haven't figured out what I am now supposed to do (apart from reading documentation, which at this point is not my top priority) (at least with CodeBlocks everything worked immediately, out of the box, for a newbie without any help).

And this is a perfect illustration of why I wanted to use a command-line environment in the first place: you spend too much time with an IDE trying to figure out how to do anything, and then there is typically no immediately obvious (or well-documented) way to automate everything by writing a batch file that can access all the useful features of the IDE. So after what is sometimes an enormous learning curve, you realise you still can't do what you want (at least that was my experience with VC2009). But granted, here I am whining, and maybe in a month, after reading all the documentation and trying everything, I may (or may not) be able to do what I thought I would be able to do using an IDE like KDevelop (or CodeBlocks).
That was actually one of my "irresistible reasons" to move to the Linux/GNU and command-line model: just build what I want easily and simply. (But I am not sure yet whether it wasn't just the sirens' call: we'll see in the next weeks.)

Maybe Doxygen is for me, but it isn't clear from a quick look at it. Does Doxygen simply build a nice-looking graph (that will bankrupt my ink-cartridge budget) using the data I am trying to generate in the first place (so that it would assume my problem already solved before I could use it), or will it also output my lists of files as plain-vanilla text files (from the command line with appropriate options) so that I can reuse the output as I wish?

Thank you for the information on “protoize”.

“What? Are you saying that you have a bunch of code without any headers? All your code is just a bunch of cpp files that you include into one big cpp file for compilation!?!?”

Absolutely! And I am quite happy with it. Indeed, I had been using the .h/.cpp style before, and I considered it pure hell (especially when you use templates a lot). Ironically, it was after reading Lakos that I switched to my new (much more productive) model, and I don't care if I am a heretic at all, especially when in my experience it works so well (I consider it my "competitive advantage": I have ZERO intention of going back to hell). (I even read from a prominent author - I can't remember the name right now, but I believe it was in one of the books suggested by vijayan121 - that my way is actually better, because it lets the compiler do whole-file optimisation, which it could not otherwise do with the bunch-of-object-files (old - obsolete? (just teasing!)) .h/.cpp model.)

“the purpose of the header file is to create an interface for a piece of code, while the purpose of the cpp file is to implement that interface. These are two different purposes that both require an intelligent programmer to do the work”

I think there might be a misunderstanding. So I'll give an example to clarify.
Let's say I am writing a rectangle class:

template <typename T>
class rect{
public:
    T x1,y1,x2,y2;
    T get_width(){return x2-x1;}
    T get_height(){return y2-y1;}
};

In your model (?), now you have to retype the whole thing without the definition parts! What an incredible waste of time. All right, to paraphrase you, it may be OK to do that duplication for one or two files, but if your project is of any normal size it will be like driving while pulling a boulder behind you, or using square wheels (or both). (Unless of course you at least get a useful "toy" ( :-) ) like makeheader.)
So while you are still sweating over rewriting most of the code a second time, since (with my model) my job for that class is already completed, I am already working on my next class! And, as a cherry on the sundae, my class will likely be even more optimised by the compiler than yours, for the previously mentioned reason! Repent and follow me to heaven... life is short!

I have now come to a personal conclusion.

1) The idea that there are simple, easily reusable tools out there, easily combinable to produce useful stuff rapidly, appears to be wishful thinking, because:
    a) the time required to find even the most simple and basic tool is simply prohibitive, with no guarantee that it even exists!
    b) in most cases it won't do what you want it to do without considerable time spent reading endless documentation and/or making time-consuming code adjustments.
2) The idea that open source could lead to "reusing" the source code of other projects also does not mesh with reality (for example, using the parsers from the source code of other IDEs to write your own tool), because:
    a) the time required to understand the logic of the other software,
    b) the time required to surgically excise the exact part needed, and
    c) the time required to integrate it into your project (given different architecture styles, etc.)
is likely to be higher than just rewriting similar functionality from scratch in your project, with the difference that rewriting it from scratch brings additional benefits such as:
    a) you own the code
    b) you understand it perfectly
    c) you can debug it instantly
    d) you can modify it easily
    e) it is perfectly integrated to your other code
    f) you don't depend on anyone else

In other words:
a) a bazaar can be tiring and time-consuming and you may never find what you are looking for in it;
and, even more importantly:
b) nobody has ever succeeded in writing a novel by reusing another writer's paragraphs.

Therefore, I will just write my own tools from scratch and stop wasting more of my time and the valuable time of kind fellow programmers.

Many thanks for the typelist answer, and many thanks for having helped me, in some atypical way, finally make my decision - even if it may be different from what you probably expected.

Here is a basic outline of how the vast majority of programmers work:

  1. Get an idea for a project, or be given a problem to solve from your boss or superior.
  2. Sit down alone or with your team to brainstorm / discuss / think about the best way to solve the problem.
  3. Draw diagrams to organize the specific parts that will make up the code, understand the dependencies, and split the responsibilities and sub-problems among the different parts of the code.
  4. Design the interfaces for the code, either alone or as a team, defining in a strict manner how each function and class will be used and what effects they will have (called pre- and post-conditions).
  5. Write header files which put these designed interfaces into concrete code in the form of class declarations, function declarations, and inclusion patterns.
  6. Split the implementation work among co-workers (or, on a lone project, split in terms of time (when you will do what part)).
  7. Work on implementing the interfaces you were given to implement, this goes into cpp files.
  8. Write a complete suite of unit-test programs to exercise and verify that all the functions implemented behave as required by the design specifications laid out in step 4. If tests do not pass, go back to step 7.
  9. Run a suite of integration tests to make sure that all the parts (written by you and your co-workers) interact as expected, if not, go back to either step 3 or 4.
  10. Deploy the product.
  11. ...

You see, by far the most important parts are the first five steps, because they determine how everything else will go. Often, steps 4 and 5 are done concurrently, and are generally considered to be the most important step and usually the one that takes the longest (at least, for experienced programmers). A typical inexperienced programmer will skip steps 2, 3 and 4, and even, in your case, skip step 5. In other words, they jump directly to blindly writing implementation code. And they usually skip steps 8 and 9 too. And then, they spend an inordinate amount of time solving bugs and patching up a broken design in one big nightmarish experience that leaves them scolded. If it was a non-trivial project, they usually won't make the same mistake twice, and learn to do things correctly the next time around.

A typical experienced programmer, on the other hand, will spend most of the time (75% or so) going through the first 5 steps. In other words, making sure the idea is good, the solution is sound, the design is solid and flexible, and the interfaces are well-defined and properly separated into their sub-problems or sub-domains. At that point, when the headers are mostly written, and the design is mostly defined, there is as yet no (or very little) actual implementation code written. Then, an experienced programmer writes the code in the blink of an eye (about 5% of the time spent). Finally, he spends a good amount of time making sure all the tests are successful and that all design specifications are met. And then, there are rarely reasons to go back to the drawing table or to solve any difficult bug (there are always little typos and mistakes, but rarely anything serious). Of course, for small daily programming problems, this entire process can happen in a day, but for larger projects, this can be a very formal process that stretches out over several months or a year. In any case, this is the modus operandi of virtually all experienced programmers, because it is a very effective way to do the job well, in good time, and with minimal pain.

So, you see, the time spent writing (typing) the content of the header files is negligibly small, and it is usually a natural way to carry out the most important step in the design process (specifying interfaces) or to start working on a piece of code. And this certainly cannot be automated.

Writing the header file is almost always done before writing any corresponding cpp file. There simply isn't a need to generate the header afterwards. And, personally, if I ever do a copy between the header and the cpp, it will be to copy the content of the header into the cpp just to quickly copy all the function prototypes at once, and then create their implementations. There is so little overlap between what goes into a header and what goes into a cpp that I don't understand what your objections are to this model.

So while you are still sweating over rewriting most of the code a second time, since (with my model) my job for that class is already completed, I am already working on my next class!

Programming is not a race of LOCs per minute. The "extra time" that I spend writing my header file carefully as I am carefully designing my software is well repaid by the extremely small amount of time that I have to spend debugging anything. In the last 3-4 years, I have almost lost most of my skills as a debugger (finding memory leaks, inefficiencies, memory corruptions, etc.) because I have not done any serious debugging since, while the first few years, when I was essentially using your method, were a nightmare of debugging and other issues. Of course, it is not just because of that; I'm also a much better programmer now (technically speaking), but programming in a careful and organized fashion went a hell of a long way to get me there.

And if you are ever going to work with anybody else, you are going to have to learn this anyway, and your "special" technique is not going to be an "edge" - at least, not unless you mean an edge with which to cut yourself.

And, as a cherry on the sundae, my class will likely be even more optimised by the compiler than yours, for the previously mentioned reason!

Doubtful at best. Maybe your literature is just obsolete. Technically speaking, if everything is compiled as one cpp file, then the compiler has more opportunity for optimizing. First of all, to do this correctly, you shouldn't use an "all-cpp" strategy, you should use an "all-headers" strategy, which is commonly referred to as a header-only library (like large chunks of the C++ standard library). And when you are writing heavily templated code, this is pretty much the only choice you have. Second, the optimizations that can be done by the linker (when assembling object files) are not nearly as bad as they used to be in prehistoric times. Given a project, taking all the cpp files and including them into one main cpp file and compiling that, versus compiling them separately and assembling the object files together into the application, won't make any kind of noticeable difference unless you are a really bad programmer who can't use inlining where it is appropriate. Third, you pay heavy prices for this "style", which is often cited as the biggest drawback of using templates a lot: your overall compilation times get substantially increased (one of the things I hate the most), you cannot effectively do incremental builds (compiling only what has changed) because any change triggers a global re-build, it is difficult to keep consistent interfaces for deployment of your library, you cannot implement compilation firewalls (which are really important when using dubious external dependencies), you cannot deploy run-time components for your library, and many more issues like that. When you consider that the number one complaint that always comes up about templates (and that has caused a large number of people on the C++ standards committee to spend inordinate amounts of time addressing it) is that you cannot put template code in cpp files in the traditional form that you would use for non-templated code, you must acknowledge that there are real, important reasons why that separation is desirable.
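
To illustrate the "compilation firewall" mentioned above, here is a minimal sketch of the usual way it is done (the pimpl idiom, assuming C++11; widget and the header/source names are hypothetical). Clients include only widget.h and never see, or recompile against, whatever heavy or dubious dependency the implementation uses:

// widget.h - the interface; no heavy dependencies are included here.
#ifndef MY_LIB_WIDGET_H
#define MY_LIB_WIDGET_H

#include <memory>

namespace my_lib {

  class widget {
    public:
      widget();
      ~widget();                    // defined in widget.cpp, where impl is a complete type
      void draw() const;
    private:
      class impl;                   // forward declaration only
      std::unique_ptr<impl> pimpl;  // the "firewall": clients never see the implementation
  };

}

#endif

// widget.cpp - the implementation; the dependency stays behind the firewall.
#include "widget.h"
#include <iostream>                 // stand-in for some heavy or unstable dependency

namespace my_lib {

  class widget::impl {
    public:
      void draw() const { std::cout << "drawing a widget\n"; }
  };

  widget::widget() : pimpl(new impl()) { }
  widget::~widget() { }
  void widget::draw() const { pimpl->draw(); }

}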

Finally, read almost any decent source about coding practices and you will hear advice more or less along the lines of: "Clarity and correctness come first, performance concerns come after" or "Don't optimize prematurely". At least, those are words often uttered by Herb Sutter, Andrei Alexandrescu, Bjarne Stroustrup, Scott Meyers, and many others, and those are essentially God-like figures in this business.

I think there might be a misunderstanding. So I'll give an example to clarify.
Let's say I am writing a rectangle class:
[.. snip ..]
In your model (?), now you have to retype the whole thing without the definition parts!

If I am designing a class to represent a rectangle, I would first think about what I want the class for. What is its purpose? What functionality should it have? Where will it be used? And how? Then, that would allow me to decide whether the class needs to be more of a POD-type (just storing the info for a rectangle) or more of an instantiation of an abstract concept (like one kind of shape) with a specific functional purpose (e.g., to be drawn, to be collided with other shapes, to be saved into standard 2D/3D geometry file formats, etc.). If it's the former, I would just write this in a header file and be done with it:

#ifndef MY_LIB_RECTANGLE_HPP
#define MY_LIB_RECTANGLE_HPP

namespace my_lib {

  /**
   * This class template is a POD-type used to store the information needed 
   * to represent a rectangle.
   * \tparam T The value-type of the components of the rectangle.
   */
  template <typename T>
  struct rectangle {
    /** This member holds the lower corner vector of the rectangle. */
    vect<T,2> lowerCorner;
    /** This member holds the upper corner vector of the rectangle. */
    vect<T,2> upperCorner;

    /**
     * Default and parametrized constructor.
     * \param aLowerCorner The lower corner vector of the rectangle (default: origin).
     * \param aUpperCorner The upper corner vector of the rectangle (default: origin).
     */
    rectangle(const vect<T,2>& aLowerCorner = vect<T,2>(0,0),
              const vect<T,2>& aUpperCorner = vect<T,2>(0,0)) :
              lowerCorner(aLowerCorner), upperCorner(aUpperCorner) { };
  };

};

#endif

If it is the latter case (a more complicated use-case for the class), then I would start by writing a header file as so:

#ifndef MY_LIB_RECTANGLE_HPP
#define MY_LIB_RECTANGLE_HPP

namespace my_lib {

  /**
   * This class template represents and allows the manipulation of a 
   * rectangle as one of many different kinds of shapes.
   * \tparam T The value-type of the components of the rectangle.
   */
  template <typename T>
  class rectangle : public shape<T> {
    protected:
      vect<T,2> lowerCorner;  // the lower corner.
      vect<T,2> upperCorner;  // the upper corner.

    public:

      /**
       * Default and parametrized constructor.
       * \param aLowerCorner The lower corner vector of the rectangle (default: origin).
       * \param aUpperCorner The upper corner vector of the rectangle (default: origin).
       */
      rectangle(const vect<T,2>& aLowerCorner = vect<T,2>(0,0),
                const vect<T,2>& aUpperCorner = vect<T,2>(0,0)) :
                lowerCorner(aLowerCorner), upperCorner(aUpperCorner) { };

      /**
       * This function translates the rectangle by a given translation vector.
       * \param aDisplacement The displacement vector to apply.
       * \post The dimensions of the rectangle are unchanged, but its center is 
       *       moved by aDisplacement.
       */
      void translate(const vect<T,2>& aDisplacement);

      /**
       * This function returns the center position of the rectangle.
       * \note The center position is calculated from internal states, 
       *       it is not cached within the object.
       * \pre This function is well-defined for any state of the object.
       * \post Does not mutate the object.
       * \return The center position of the rectangle.
       */
       vect<T,2> getCenter() const;

       //... so on..
  };

};

#endif

And when I'm satisfied with the interface that I have set up, I implement the functions, either within the class declaration (for simple one-liners), within a cpp file (for non-templated code), or further down in the header file (if it is a non-trivial function template (or a member of a class template)). I much prefer it when I can put it into a cpp file, but I also love to use templates, so it's a tough decision. When I am done implementing the class, I will write and test a unit-test program more or less like this:

#include "my_lib_rectangle.hpp"
#include <iostream>
#include <limits>

template <typename T>
bool test_rectangle() {

  my_lib::rectangle<T> r1 = my_lib::rectangle<T>( my_lib::vect<T,2>(-1.0,-1.0), 
                                                  my_lib::vect<T,2>(1.0,1.0));

  if( norm(r1.getCenter()) > std::numeric_limits<T>::epsilon() ) {
    std::cerr << "my_lib::rectangle class template failed center-test for type '"
              << typeid(T).name() << "'!" << std::endl;
    return false;
  };

  // .. so on.. for all functions that I implemented.

  return true;  // all tests passed for this value-type
};


int main() {

  if( test_rectangle<int>() &&
      test_rectangle<double>() &&
      test_rectangle<float>() &&
      test_rectangle<long double>() &&
      test_rectangle<long long int>() )
    return 0; // success!
  else
    return 1; // failure!
};

Of course, the example you gave (the rectangle class) makes all this look really tedious, because the class itself is such an incredibly trivial example. But, in real life, this careful method, with good interface designs and specifications, and comprehensive unit-tests, is a hugely useful and beneficial method. All professional programmers that I have ever encountered follow this method and appreciate it greatly, but, of course, there is some variation in the rigour used in all this (my example above is somewhere in the middle, sort of what you would expect from either a typical open-source project or common practices in small software companies, but in other industries (manufacturing, server-side programming, space projects, etc.) they are a million times more rigorous and careful than that (the time you actually spend writing the code is insignificant)).

To Mike:

Thanks for this very interesting exposition (opening a view into a world to which I do not have access), containing ideas I need to take some time to digest.

Nevertheless, and forgive me about this, I remain quite skeptical.

If this "careful method, with good interface designs and specifications, and comprehensive unit-tests, is a hugely useful and beneficial method" and if "there are rarely reasons to go back to the drawing table or to solve any difficult bug (there are always little typos and mistakes, but rarely anything serious)", how is it then that the biggest companies in the field (Microsoft, but I could also use other examples) aren't able to deliver their products on time, and without bugs so huge and significant that they have, in the past, embarrassingly crashed flagship presentations of their "new super-stable, bug-free, most-tested-ever product in the world"?

More recently, in a previous discussion with vijayan121, we came across a bug in VC2009. I submitted the bug to Microsoft, and they confirmed it. Surely, with the state-of-the-art techniques and seemingly perfect coding architecture you so eloquently described, it should have been a breeze for a company with 73 billion in annual revenue to fix it, given that VC2009 is supposedly a flagship product? Wouldn't even a summer intern, with the powerful tools you described, have been able to suggest the fix himself? Well, guess what: they responded that it would be too difficult for them to fix and that they would not fix it! So ciao to that nice theory of easily repairable, well-designed software!

I could also point out this: how is it possible that the same company, with 73 billion in annual revenue, CAN'T produce a fully conforming C++ compiler! This is not at all an unreasonable question. In my view, it indicates that the current methods of software engineering are significantly inadequate.

So maybe one has to distinguish between what looks good academically and theoretically and what works best in practice. Clearly I am not in a position to judge this, but I can see clearly (as the general public also can) that the method you described has utterly failed to deliver products on time and without serious bugs, even for the largest companies, which are able to afford the most prized programmers in the world.

So maybe simplicity might be reconsidered.

In my case, for the very small projects I do, the approach you are suggesting would likely be dramatic overkill, but I will nevertheless give it some more thought. I doubt I will change my ways, because I adhere to the philosophy "don't fix what ain't broken", and simplicity has always worked beautifully for me in my life.

"When you consider that the number one problem that always comes up about templates and has caused a large number of people on the C++ standard committee to spend inordinate amounts of time addressing the issue that you cannot put template code in cpp files in the traditional form that you would for non-templated code, you must acknowledge that there are real important reasons why this is desirable."

This was actually one of the main reasons why I changed from the .h/.cpp model to my current one. Now I can use (immensely useful) templates quickly and easily, without any hassle. My code will soon be more LISP-in-C++ than C++ if the trend continues...

If this "careful method, with good interface designs and specifications, and comprehensive unit-tests, is a hugely useful and beneficial method" and if "there are rarely reasons to go back to the drawing table or to solve any difficult bug (there are always little typos and mistakes, but rarely anything serious)", how is it then that the biggest companies in the field (Microsoft, but I could also use other examples) aren't able to deliver their products on time, and without bugs so huge and significant that they have, in the past, embarrassingly crashed flagship presentations of their "new super-stable, bug-free, most-tested-ever product in the world"?

More recently, in a previous discussion with vijayan121, we came across a bug in VC2009. I submitted the bug to Microsoft, and they confirmed it. Surely, with the state-of-the-art techniques and seemingly perfect coding architecture you so eloquently described, it should have been a breeze for a company with 73 billion in annual revenue to fix it, given that VC2009 is supposedly a flagship product? Wouldn't even a summer intern, with the powerful tools you described, have been able to suggest the fix himself? Well, guess what: they responded that it would be too difficult for them to fix and that they would not fix it! So ciao to that nice theory of easily repairable, well-designed software!

Microsoft's coding practices are shameful, and always have been. They are primarily a marketing firm. If you read their internal communications, management has made it very clear that they do not care one bit about the quality of their products or the innovations therein. Their strategy is almost solely one of marketing and market control, and it has been since they realized they couldn't compete on technical grounds. They initially (around the time of Windows 3.0) beat all their competitors by doing what nobody else had willfully done before: release an unfinished product. Because they did so ahead of their competitors, and put a ton of effort into marketing, they were able to capture the market, and they have controlled it ever since. Microsoft is a very interesting case-study in entrepreneurship, but it is in no way an example of good programming practices or software development. Microsoft is a marketing machine; software products are secondary for them (and, in any case, consumer products are notoriously low in quality, across the board, because quality is only a secondary concern in that market).

And by the way, I would say almost the same of Apple. They have a slightly better track record with software quality, but they are still far from being an example.

For large entities that are more of an example for coding practices, you would be better off looking towards Oracle, Novell, Borland (R.I.P.), C.E.R.N., Bell Labs., etc.

I could also point out this: how is it possible that the same company, with 73 billion in annual revenue, CAN'T produce a fully conforming C++ compiler! This is not at all an unreasonable question.

Microsoft does not care at all about C++, and never has. Their compilers have always been shameful. Lately (the past couple of years), they have been making modest efforts to catch up to those who do care about C++ (Intel, GNU, and a few high-profile C++ users like Facebook and Google), but, of course, in terms of compilers, everyone is ahead of Microsoft. The 2008 compiler was the first usable one; the 2010 version is OK. That's a pathetic track record (a 10+ year delay in standards compliance).

In my view, it indicates that the current methods of software engineering are significantly inadequate.

Again, I don't think there is any method to the madness going on at Microsoft. Your example is a really bad one; I know of no one who would take MS as an example of a company using current methods of software engineering.

For a stellar example of open-source coding, take the example of Boost. For stellar examples of rigorous software design, look towards CERN, satellite systems, manufacturing software, and certain server-side programming (Oracle, Cisco, Novell, etc.).

So maybe one has to distinguish between what looks good academically and theoretically and what works best in practice. Clearly I am not in a position to judge this, but I can see clearly (as the general public also can) that the method you described has utterly failed to deliver products on time and without serious bugs, even for the largest companies, which are able to afford the most prized programmers in the world.

In the places where people care about software quality, people follow the kind of method I described, and do so rigorously. This is common practice and was a result of practical lessons learned in the trade. However, don't be fooled into thinking that consumer products are any kind of example of good software; almost none of them come even close to minimum quality-assurance standards. Nevertheless, most programmers, including those working on consumer products, are aware of the ideal methods for producing really good software in good time; however, managers in consumer software product companies generally don't care about those things, and the methods are rarely followed in those areas. And consumer software firms do not hire the "most prized programmers"; those are mostly hired by large telecommunications, server, database or scientific / engineering / manufacturing software firms.

So maybe simplicity might be reconsidered.

There is an actual name for your approach: it is called "cowboy coding". It is not always bad. I sometimes do this when I just want to quickly get a trivial piece of code together (e.g., throwing together some code to drive an R2D2-style robot around a warehouse; that's the last time I did it). But, more often than not, I end up regretting it. You quickly get from "the blank page" to "the running program", but that's only where the trouble begins (bugs, extensions to the code becoming really hard to do, lots of patchwork, etc.).

In my case, for the very small projects I do, the approach you are suggesting would likely be dramatic overkill, but I will nevertheless give it some more thought. I doubt I will change my ways, because I adhere to the philosophy "don't fix what ain't broken", and simplicity has always worked beautifully for me in my life.

Sure, if what you do works for you, use it. But I suggest you consider it to some extent, or borrow some ideas from these types of "professional" methods and incorporate them into your own, and see if you like it. I also work on reasonably small projects (about 100 thousand lines of code right now; most of my past projects were in the 30-80 thousand LOC range). One thing that you should seriously consider is using unit-tests; they don't have to be super-rigorous, but if you just spend some time producing a test program for all the main functions in a new piece of code you wrote, you will quickly find great benefits in it (mostly, the nice thing is that when you know that you have tested, incrementally, in most meaningful ways, all the important individual functions in your library, it is really easy to find problems or bugs because there are only very few places they could be). Every bug you catch with a unit-test can mean several hours spared looking for that bug later when your overall program crashes or produces weird results, and so finding bugs that way is a really great feeling, and a time-saver, even for small projects. The second "professional" technique worth adopting (even for small projects) is discipline in commenting the code, and especially the interfaces (as I did in my earlier post, with doxygen tags); this is also time well invested (as time spared a few months later not having to decipher what you did a few months earlier and have since forgotten about).

“One thing that you should seriously consider is using unit-tests”

But I already do that! And it's extremely easy to do with my method! Probably dramatically more so than any other way. I simply add a
#include "filename.test.cpp"
to my single everything.cpp file just after
#include "filename.cpp"
and I am instantly productive.
Even better, I can change the entire configuration/implementation just by changing a few template parameters!
No messing with networks of header files at all.
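Concretely, a minimal sketch of what I mean (the file names here are just made up for illustration):

// everything.cpp -- the single translation unit that actually gets compiled:

#include "filename.cpp"        // the implementation under test
#include "filename.test.cpp"   // its test code, pulled in right after

//... the other .cpp files, included in an order that avoids any circularity.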

However, since I am now trying to move from vc2009 to g++, it is possible (aarrgh!) that I may have no choice but to use at least some of your ideas (at least until I create my tools and get them up and going), so you might as well continue your insidious propaganda in the meantime if you wish! Who knows, you might even wear me down into a new nirvana!

My idea in switching to GNU and batch files was that I could do everything I am doing right now even more automatically, by launching a single batch file that would build many test files and project files all at once, re-concatenating my .cpp files in intelligent, appropriate ways. Clearly, after this discussion thread, I see that GNU is not anywhere near as ready a platform to do this smoothly and easily as I erroneously thought it was, and I will possibly need weeks of tedious work to bring it up to what I consider a more useful platform for my needs.

But perhaps there is another, almost equivalent way to do this in GNU right now? How? I have never used “makefiles”, “make”, “cmake”, “automake”, etc., and there is no obvious guide to help me in the GNU books I recently obtained. I don't know which of these files/tools are obsolete vestiges from the '80s, which don't work, which are too buggy to bother with, and which are actually still in use, useful, and working correctly.

And after witnessing the seeming indifference of some programmers to typing the same thing twice... (I won't be giving any names, promise), maybe it is even much, much worse than I thought, and I will have to manually create dependency relations between my files one by one (aaargh!) (who knows how much hardship some people may be ready to endure after it has piled up insidiously over time)!

If you could just point me quickly to a good tutorial that precisely and succinctly addresses this subject, I would be grateful, especially if the whole state of this platform turns out to be even more depressing than it already is in my mind, as I terribly fear it might be.

But I already do that! And it's extremely easy to do with my method! Probably dramatically more so than any other way. I simply add a
#include "filename.test.cpp"
to my single everything.cpp file just after
#include "filename.cpp"
and I am instantly productive.
Even better, I can change the entire configuration/implementation just by changing a few template parameters!

Ishh... You actually have to modify code (by adding an include statement or changing a few template parameters) to add the unit-tests... that's very ugly, sorry. There are two common ways to set up unit-tests:

1) You put the "main()" function for the unit-test at the end of the cpp-file containing the functions that are tested by the unit-test program, but you wrap that main() function with a compilation flag. Like so:

// in a cpp-file:

#include "my_functions.h"

void foo() {
  //...
}

//... other function implementations.

// MY_LIB_COMPILING_UNIT_TESTS would typically be defined on the compiler's
// command line (e.g. -DMY_LIB_COMPILING_UNIT_TESTS) only when building the test executable.
#ifdef MY_LIB_COMPILING_UNIT_TESTS

int main() {
  //... all the unit-test code.
  //... usually returns 0 for success and an error code otherwise.
}

#endif

2) You create a separate cpp-file for each unit-test and link it against your library (a minimal sketch follows below).
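For illustration, a separate test file could look roughly like this (the names are hypothetical; the point is that it only includes the header and is linked against the compiled library):

// in a separate file, e.g. my_functions_test.cpp:

#include "my_functions.h"   // only the interface is needed here.

int main() {
  //... call foo() and the other functions, and check their results.
  //... return 0 on success, a non-zero error code otherwise.
  return 0;
}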

The advantage of either of these methods is that there is no intrusion at all on your library's code itself. In other words, without changing anything in the code, you can seamlessly compile all your code into static or shared libraries (and/or a set of executable applications) without leaving a trace of the unit-tests, and you can easily compile and run all the unit-tests in one batch. If you use cmake, the former is done with the single command $ make, and the latter (compiling and running all the unit-tests) with the single command $ make test, without any other work.

But perhaps there is another, almost equivalent way to do this in GNU right now? How? I have never used “makefiles”, “make”, “cmake”, “automake”, etc., and there is no obvious guide to help me in the GNU books I recently obtained. I don't know which of these files/tools are obsolete vestiges from the '80s, which don't work, which are too buggy to bother with, and which are actually still in use, useful, and working correctly.

GNU/Bash is certainly a powerful scripting environment and it is worth learning a few tricks with it. However, for build automation it is a bit limited, and it lacks the cross-platform qualities of the more popular build-systems. To clarify the tools you listed:

  • "makefiles" and "make": "make" is the command to execute the "make" tool which reads a file called "makefile" in the current directory which is a kind of configuration-file / build-instructions for a number of compilation targets. This is a very old system (but so is Bash), but it works very well. The problem is that writing the makefiles manually is really masochistic, because its syntax and organization is horrible (for a human being). However, it is so powerful and widely supported that virtually all computers or micro-controllers that have a compiler on them also have the make tool installed, making it a build-system of choice for portability, and it makes it a timeless tool. Also, many of the build-systems simply generate makefiles (this is the case for cmake, qmake, autoconf / automake, bjam, Visual Studio's "project files", etc.), and most people prefer interacting with one of those build-systems instead of makefiles directly.
  • "cmake": this is not very old and is an extremely powerful build-system and a good cross-platform scripting tool, which makes it a very popular choice for build configurations. Also, many open-source libraries use it or support it actively. I definitely recommend it very strongly. It's syntax is easy and simple, child's play.
  • "automake / autoconf": this was a pretty terrible attempt at making the process of writing makefiles a bit easier. I don't think that anyone who used autoconf actually appreciated the experience. Again, a masochistic tool, IMHO.

The more current and popular build-systems (alongside cmake) are qmake (more geared towards the Qt toolset) and bjam (used and developed by Boost). qmake is virtually identical to cmake but with more Qt-specific commands (but Qt also provides the equivalent commands as a cmake module, if you want to use cmake).

And after witnessing the seeming indifference of some programmers to typing the same thing twice...

1) I repeat: THE HEADER FILE IS NOT A REPETITION OF THE SOURCE FILE! Not even close to it (see the small illustration after this list).
2) I don't type things twice, I know a very useful tool to avoid that: CTRL-A, CTRL-C, CTRL-V
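To make point 1 concrete, here is a minimal, hypothetical example: the header carries only the declaration and its documentation, while the cpp-file carries the definition.

// my_functions.h -- declares the interface only:

void foo(int count);   // what callers need to know about foo.

// my_functions.cpp -- defines the behaviour:

#include "my_functions.h"

void foo(int count) {
  //... the actual implementation, which callers never need to read.
}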

maybe it is even much, much worse than I thought, and I will have to manually create dependency relations between my files one by one (aaargh!)

Programming without taking into account the dependencies you are creating between the components of your software/library is wrong, period. Interdependencies are one of the most critical aspects of any good software design. One of the prime criteria I use to judge a good versus a bad software design is the amount of interdependency. But the only way you can do a good job on that front is by having serious reflection about it before you write the code, and it helps to use the classic header/cpp model (or the header-only template-library model) to make those interdependencies clear and obvious. If you look at a coding-practices book like Sutter and Alexandrescu's "C++ Coding Standards", you will find that there are at least a dozen or two guidelines that you could file under "minimize interdependencies".
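As one small, hypothetical example of what "minimize interdependencies" means in practice: a header that only refers to another class through a pointer or reference can forward-declare it instead of including its header, so changes to that class no longer ripple through every file that includes yours.

// widget.h (hypothetical):

class Renderer;            // forward declaration: no #include "renderer.h" needed here.

class Widget {
public:
  void draw(Renderer& r);  // the full definition of Renderer is only needed in widget.cpp.
};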

The fact that you have this problem of not being able to sort out your interdependencies is only a testament to your failure to handle this very critical aspect of the design process. The fact that you have this problem after already having a lot of code is just baffling to me. You are correct about one thing: this is a very unusual problem, at least this far into a project.

If you could just point me quickly to a good tutorial that precisely and succinctly addresses this subject, I would be grateful, especially if the whole state of this platform turns out to be even more depressing than it already is in my mind, as I terribly fear it might be.

Go through some cmake tutorials. They are fairly short because cmake is so quick to learn.

I have also been considering writing a proper tutorial on the overall header / cpp / object-file / static-lib / shared-lib / executable model and how to work with a build-system, compiler, and linker, since I don't know of any good one. Stay tuned for that.

As for doxygen, it really doesn't require a tutorial: just use the wizard, it's child's play. And you can consult the list of tags.
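For a flavour of it, interface comments with doxygen tags look roughly like this (a hypothetical function, just for illustration):

/**
 * Computes the total price of an order, including tax.
 * \param net_price  The price before tax.
 * \param tax_rate   The tax rate as a fraction (e.g. 0.15 for 15%).
 * \return The price including tax.
 */
double total_price(double net_price, double tax_rate);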

If you are not already using a version control system, do so, I recommend Git, but svn is OK too. Tutorials are easy to find.

Then, overall, I highly recommend that you read the "C++ Coding Standards" book by Sutter and Alexandrescu. If that book can't convince you of at least some of the things I've been advocating here, nothing will.

Thanks Mike, I do appreciate your time and help (… and encouragement!).

Applying again what has always worked very well for me, I decided to simplify everything as much as I could and focus on my specific task at hand: moving from vc2009 to GNU and replicating at least what I can already do with vc2009.

I did a simple preliminary test and realised that I actually don't need any kind of make command/file or any dependency table whatsoever, and can just continue working with my successful method immediately, using g++ to compile/link! That's fabulous!

I used the very straightforward Geany software (included in my Linux distribution) to give me the ready-to-use, correct options I needed for my command-line compiling/linking. It worked beautifully, immediately.
So this first job is done. Yahoo!

Now for the second task: I have to create my tools and not distract myself or waste any time learning make or other stuff that I don't need for what I want to do. I may even forgo the creation of the tool (mentioned earlier) and simply continue to program the same way I always have on vc2009, with the now considerable improvement that I have access to a free Linux OS and a free GNU C++ compiler that is up to date with the C++11 standard and has all of its most useful new features! My day is ending very well indeed (after quite a few depressing moments earlier... as you may have suspected!)

I will keep everything you said somewhere in my head, and it may be helpful one day, especially if I ever face program-organisation difficulties (which has not been the case at all until now, quite the contrary), but at this point I will have more than enough on my hands learning the Linux programming interface and gtk+ to keep me more than busy!

I truly appreciate your contributions and those of the others (Vijayan121 in particular). Thanks a lot to all of you.
