Understanding C++ - From source to binaries


Introduction

A recurring problem many newcomers face when tackling C++ is the compilation process, from sources to binaries. This tutorial will detail that process. It will be especially useful for people who are only familiar with higher-level languages (or no language at all), but veterans of one or more native languages (C, Fortran, C++, D, etc.) might still learn a thing or two. And to that point, I would also mention that, conceptually, most of this tutorial applies to any native, compiled language.

For widest applicability, this tutorial will feature GNU tools (GCC) used in a Unix-like environment (POSIX). For Mac OS X, the preferred compiler is Clang++, whose tool-suite is compatible with the GNU tools. Similarly, under Windows, the MinGW tools (incl. GCC) are also mostly the same, with some special issues. The only drastically different environment is the Microsoft Visual C++ compiler (MSVC). The oddities of the MSVC stack are beyond the scope of this tutorial.

Glossary

Before we start, we have to explain a few of the terms we will encounter:

Header file: Contains C++ source code and is mostly used to make declarations, i.e., it contains the stubs that tell the compiler about the "things" that exist (somewhere). Usual extensions: .hpp, .h, or .hxx.

Source file: Contains C++ source code and is used exclusively for definitions, i.e., the actual implementation of the "things" that make up the program or library. Usual extensions: .cpp, .C, or .cxx (normally, .c is reserved for C source files, not C++).

Translation unit (TU): A technical term for a source file once the compiler has processed all its #include directives, substituting each one with the content of the header file it refers to.

Object file: The simplest kind of binary file, produced by compiling a single translation unit (TU) into executable code. Usual extensions: .o or .obj (MSVC).

Static library (or static-link library): A binary file which is very similar to an object file but is larger and usually the product of many object files combined into one file for convenience. The GNU stack refers to static libraries as archives, i.e., collections of object files. Usual extensions: .a or .lib (MSVC).

Import library: A special static library that is used by the MSVC toolset to make the static link between a standalone executable and a dynamic-link library (DLL).

Dynamic library (or dynamic-link library or shared object): A binary file also created from a set of object files, but packaged as a standalone executable library (not a program). Usual extensions: .so (POSIX) or .dll (Windows).

Executable program: This is a type of binary file that can be started and executed, i.e., it has an execution entry-point. In short, this is a program you can run. Usual extensions: <nothing> (POSIX) or .exe (Windows).

Basic C++ Sources

Nominally, the source code of a C++ project is composed of two kinds of files: headers and source files. Both are just text files, but with distinct purposes. A header file is a file that is meant to be included in any source file that needs to use the "things" that the header file declares. In that sense, a header is often no more than a set of declarations (or stubs) for functions and classes that can be used in any source file that includes that header.

Here is a basic example of a simple header file (e.g., my_simple_header.hpp):

#ifndef MY_SIMPLE_HEADER_HPP
#define MY_SIMPLE_HEADER_HPP

// declaration of a free function:
double sin(double x);

// declaration of a class ..
class Foo {
  public:
    // .. with a data member:
    int bar;
    // .. with a static data member:
    static int value;

    // .. with a const member function:
    void hello() const;
    // .. with a static member function:
    static void world();
};

// declaration of a global variable (the extern keyword makes this a
// declaration only; the definition will be in a source file):
extern int global_var;

#endif

In that example, there are many declarations of "things". By "things", I mean any of the many kinds of entities that exist in C++, including functions, classes, data members, static data members, and global variables, to name a few. You can notice in the example header that no "real code" can be found. This is not a strict requirement (see next section) but it is the usual practice (and for good reasons).

Another important aspect of header files is the header-guard. A header-guard is a pre-processor conditional (the #ifndef MY_SIMPLE_HEADER_HPP part) which guarantees that the header file's content appears no more than once in a given translation unit. In C++, we have the One Definition Rule (ODR), which says that "things" should only be defined once in the entire program, and some "things", like classes, can only be defined once in a given translation unit. When the pre-processor (the first pass of the compiler, which looks for # commands) sees a #include command, it finds the file in question and performs a copy-paste of its content to where the #include command appears. Without header-guards, the same content could appear several times, and even lead to an infinite recursion (i.e., an inclusion-cycle). In short, always use header-guards when writing a header file. The name used in the header-guard is arbitrary but should be unique, and thus, by convention, it's often the name of the header file itself, in capital letters, with the extension, and with a library-specific or project-specific prefix.

The other type of file is a source file, i.e., the cpp files. They contain the "real code", that is, the implementation of the "things" that were declared in an associated header file. Source files are compiled one by one, each becoming a separate object file once compiled. This is called the separate compilation model, which is fundamental to C, C++, D, and most other compiled languages. In this compilation model, each source file must include every header file it requires to work. For the compiler, any code that is not in the source file or declared in one of the included headers does not exist, that is, the compiler looks at one translation unit at a time, in isolation. This can seem inconvenient, but this way the compiler only has to deal with small chunks of code rather than chew up the entire code-base in one big gulp (if that were the case, most large C++ projects would not compile on an ordinary PC without running out of memory or time).

Enough talk, here is an example source file corresponding to the header file example from above:

// include the necessary header files:
#include "my_simple_header.hpp"

// definition of the free function:
double sin(double x) {
  /* ... code goes here ... */
}

// definition of the static data member:
int Foo::value = 0;

// definition of the const member function:
void Foo::hello() const {
  /* ... code goes here ... */
}

// definition of the static member function:
void Foo::world() {
  /* ... code goes here ... */
}

// definition of the global variable:
int global_var = 0;

This gives an example of how a source file is organized: it first includes its corresponding header, then any other header it needs, and finally, the definitions of each of the declared elements of its corresponding header.
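To complete the picture, here is a hypothetical main program (the file name is my own invention) that uses those declarations. Notice that it only needs to include the header; the actual definitions will be supplied later, at link-time (more on that in the Linking section):

// my_simple_main.cpp (hypothetical example):
#include "my_simple_header.hpp"

int main() {
  Foo f;
  f.bar = 42;      // use the data member
  f.hello();       // call the const member function
  Foo::world();    // call the static member function
  global_var = 1;  // use the global variable
  sin(0.5);        // call the free function (result ignored here)
  return 0;
}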

For most day-to-day programming in C++, this is pretty much it. Some advanced topics follow in the next section.

Advanced C++ Sources

The previous section might have given the impression that some things are not very strict, and that is a correct assessment: C++ is a very free-form language based on a few simple compilation rules (e.g., the ODR). The previous section explained the basic, conventional way to organize things. Beyond that, we can deviate on most aspects whenever it's useful to do so, and I stress, when it's useful (don't break conventions carelessly).

For instance, there are no rules about file extensions in the #include command; it's just a "copy-paste" operation after all. Of course, it's clearer when you use conventional extensions and only include headers, but if need be, it's possible to deviate from that. In C++, we mostly stick to conventional practice, but there are always open doors for special needs.
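For example (a contrived sketch of my own), a machine-generated data file can be spliced right into the middle of a definition, whatever its extension happens to be:

// contents of coeffs.inc (hypothetically generated by some script):
//   1.0, 0.5, 0.25, 0.125

// in a source file, the generated data is spliced into an array definition:
const double coeffs[] = {
  #include "coeffs.inc"
};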

Another rule that is not as strict as I made it sound relates to the ODR (One Definition Rule). Specifically, I said that headers should only contain declarations. This is not the complete story. Very often, functions are very small, only a few lines or less. In those cases, it can seem tedious to declare the function in the header and then implement it in the source file. To solve this, we can simply make an inline definition. This means we just put the definition in the header file and mark the function with the keyword inline:

#ifndef MY_SIMPLE_HEADER_HPP
#define MY_SIMPLE_HEADER_HPP

// declaration of an inline function:
inline double sin(double x) {
  /* ... code goes here ... */
}

#endif

N.B.: when I speak of inline definitions, I am not talking about inlined functions (or function inlining); the former is a decision by the programmer about where to place the definition, while the latter is a decision by the compiler about how to best optimize the executable code. However, function inlining does require an inline definition.

For member functions, inline definitions can be written directly within the class declaration, in which case the inline keyword is not needed, or outside the class declaration but still within the header file, in which case the inline keyword is necessary. Like so:

#ifndef MY_SIMPLE_HEADER_HPP
#define MY_SIMPLE_HEADER_HPP

// declaration of a class ..
class Foo {
  public:
    void hello() const {
      /* .. inline code goes here .. */
    }

    void world();
};

inline void Foo::world() {
  /* .. inline code goes here .. */
}

#endif

Beyond the convenience, inline definitions of simple functions also make them candidates for function inlining, which is when the compiler optimizes the code by replacing function calls with the actual code of the function. In other words, an inlined function does not require a jump to the function's code, but simply executes the body directly within the calling context. So, this can be a performance gain for simple functions, but that is something for the compiler to decide.

Finally, another important reason to put definitions in a header file is to define a template. The subject of templates is beyond the scope of this tutorial, but suffice to say, a function template or a member function of a class template must be defined in the header file. This is because a template is not actual code but rather a set of instructions to the compiler on how to generate code for a particular instance of the template, which means the definition must be available in every translation unit in which the template is used.
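Templates deserve a tutorial of their own, but just to fix ideas, here is a minimal sketch of a function template; notice that the full definition sits in the header:

#ifndef MY_TEMPLATE_HEADER_HPP
#define MY_TEMPLATE_HEADER_HPP

// a function template; the compiler generates actual code for each
// type T that the template gets used with:
template <typename T>
T max_of(T a, T b) {
  return (a < b) ? b : a;
}

#endif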

Compilation

We now move on to explaining the compilation process. In simple terms, compilation means turning source code into binary code, i.e., executable code. The purpose of this section is not to explain all the details of how a compiler does its work, but just to give a brief overview, at the level of detail that I think most programmers are interested in and that can be useful in the grand scheme of things, i.e., enough to demystify the compilation process.

Compilation of C/C++ code happens in three main steps (excl. linking), which could roughly be grouped under three terms:

  1. Pre-processing
  2. Syntactic analysis
  3. Code generation

Compiler writers will tell you there are many more steps, but I'm just lumping them into these three conceptual steps, for the sake of brevity. Typically, steps 1 and 2 are in the front-end of the compiler program, while the last step is in the back-end, because the back-end is mostly language-independent and can be re-used by multiple compilers directed at different languages (e.g., the GNU Compiler Collection has just one back-end for all its compilers (C, C++, Ada, Objective-C, etc.), and the same goes for other compiler suites (ICC, Clang, etc.)).

Pre-processing is the first major pass over the entire source code. In short, anything in C/C++ code that begins with a # character is a pre-processor command. The pre-processor is mostly a glorified find-and-replace tool. Case in point, the pre-processor does macro substitutions, i.e., code like this:

#define TEST "Hello, World!"
const char str[] = TEST;

gets turned into this:

const char str[] = "Hello, World!";

and similarly for function-like macros, as shown below. Also, when the pre-processor sees a command like #include <cstdlib>, it goes and fetches the cstdlib header file, and copies its content into the place of the include command.
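To illustrate the function-like case, a macro with parameters gets the same find-and-replace treatment (the textual substitution of each parameter is also why all those parentheses are good practice). Code like this:

#define SQUARE(x) ((x) * (x))
double y = SQUARE(1.5 + 0.5);

gets turned into this:

double y = ((1.5 + 0.5) * (1.5 + 0.5));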

Additionally, pre-processing involves some basic logic. In particular, the pre-processor evaluates conditionals, like #if, #ifdef or #ifndef, and will conditionally leave code in or eliminate it depending on the evaluated condition. These conditional compilation blocks are generally used as a kind of "code configuration" (e.g., enable platform-specific code, or workarounds for particular compilers). And finally, the pre-processor often handles some compiler-specific extensions, mostly starting with #pragma (i.e., #pragma is reserved for non-standard pre-processor extensions, e.g., #pragma once for a non-portable header-guard).
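For instance, a typical "code configuration" block looks like this (a small sketch; _WIN32 is the macro conventionally predefined by compilers targeting Windows):

#ifdef _WIN32
  // .. Windows-specific code goes here ..
#else
  // .. POSIX-specific code goes here ..
#endif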

Once the pre-processor has finished, we can say that a translation unit (TU) has been fully formed from a source file: all its included headers fetched and copied into it, all its macros found-and-replaced, and all its conditional compilation blocks resolved. At this point, the output would look like one big piece of C++ code, with comments removed, and with some added annotations to facilitate the next step in compilation (or the diagnostics of errors).
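If you are curious, you can look at a fully-formed translation unit yourself: the -E option tells GCC to stop after pre-processing and output the result (brace yourself, even a small source file balloons once the standard headers are copied in):

$ g++ -E hello.cpp -o hello_tu.cpp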

Syntactic analysis is a rough term I use for all the parsing that checks language rules and builds an internal representation of the code. This is when the compiler tries to "understand" your code. The language standard (e.g., C++ ISO Standard) is a document that describes, in detail, how the compiler should interpret / understand the code. A "standard-compliant" compiler is one that does so correctly in all cases imaginable (very very few compilers are strictly standard-compliant, but most are "close enough").

All the steps involved in syntactic analysis are too complicated to explain here; they involve terms like lexical analysis, syntax analysis, context-dependent grammars, semantic analysis, etc. Suffice it to say, broadly speaking, the compiler builds up an Abstract Syntax Tree (AST) and a symbol table. The former is a tree representation of the code where, for example, an operator would be a node and its operands would be its branches. The latter is simply a big look-up table of all the "things" that exist in the code (functions, classes, variables, etc.), arranged by "symbols" (a string formed from the original name and type of the "thing" in question). Once the compiler has done that, it has pretty much "understood" the code and has a useful in-memory representation of it. Before generating code, the compiler will typically do some static analysis, or high-level optimizations, simplifying the AST while preserving the behavior (e.g., DU-analysis (DU: Definition-Usage) to re-organize things in an equivalent but simpler manner).

Code generation refers to the last big step in the compilation process, i.e., producing the binary code. Once the compiler has done the syntactic analysis and performed some high-level optimizations, it produces an early translation into a very simple language, usually referred to as an Intermediate Language (IL), which is usually somewhere between a C dialect and a kind of platform-independent pseudo-assembly language (e.g., Comeau's IL is ANSI C, while GCC's IL looks more like an assembly language). This intermediate output is not really meant to be human-readable, in the same sense that most programmers would say assembly language is not human-readable (of course, some programmers read and write assembly with great competence and effectiveness, but they are a "special breed").

Code generation, at this point, is more like people's basic intuition of what a compiler does: it maps source code to binary code (machine instructions). Because the IL is very simple, high-level things like "classes" or "virtual functions" have disappeared, and it's now just very simple, low-level code. Nevertheless, there is still significant room for code optimization, i.e., this is when the compiler streamlines every bit of code as much as possible, including tricks to spare a few instructions, making good use of registers, re-ordering operations for optimal performance on the target platform, etc. Programmers who engage in doing these kinds of optimizations manually call it nano-optimization or micro-optimization, referring to the minuscule scale of these optimizations (which can nevertheless have a huge impact inside a fast loop). However, compilers have become incredibly talented at code optimization in recent years, meaning that programmers shouldn't try to do this manually unless there is good evidence that there is a need for it.
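Here too, you can peek at the result yourself: the -S option stops the compiler after code generation and writes out a (human-readable) assembly listing instead of an object file, and adding -O2 turns on optimization, which makes for an instructive comparison:

$ g++ -S hello.cpp -o hello.s
$ g++ -S -O2 hello.cpp -o hello_optimized.s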

This is pretty much all the average programmer really needs to know about the compilation process. After all, your concern as a programmer should mainly be the correctness, clarity and maintainability of your code, but it's always good to know what the compiler does, as it can inform certain decisions while coding.

How does the compiler find headers?

One recurring question from people beginning to do some C/C++ is: How does the compiler find headers? I thought I would answer this here, because it's simple to answer, but many beginners lose hours scratching their heads over it, and worse, come up with crazy solutions that are not really practical.

It's really quite simple. In an #include command, the file name is always a relative path to a header file. Relative to what? That's the important part. The compiler (pre-processor) maintains a list of include-directories, i.e., all the directories in which it will look for header files (by appending the relative path given in the include command to each directory in turn). This is an ordered list: it goes down the list and takes the first match. At the very least, that list contains a number of system directories, where you would typically find the standard headers and maybe some installed library headers. In Unix-like systems, that system directory is usually just /usr/include and/or /usr/local/include. In Windows, things are less straight-forward (it depends on your setup).

Then, you can tell the compiler to consider additional include-directories. Configuring those additional directories is done either within your project configuration (in the IDE), within your build script (e.g., makefile or using cmake), or simply as a command-line option when invoking the compiler manually (the option is -I), for example:

$ g++ -Wall -I/home/user/my_lib/include my_test_program.cpp -o my_test_program

It's really all that simple. In addition to the list of include-directories, the compiler can also consider the current directory, that is, the directory in which the current source or header file is. If you want that directory to be considered in the search for the header file, you must use the double-quote notation for the include command, like #include "my_header.hpp" instead of #include <my_header.hpp>. Note, however, that this is not strictly required by the standard, but all compilers, AFAIK, implement this behavior.

As a final note, I will also mention that there is a special class of include-directories, called "system include-directories", which are useful because any compilation warnings coming out of any header file found in those directories will be suppressed from the output. So, if you are using some external header files that cause warnings, specify their include-directories as "system"; the compiler command-line option is simply -isystem dir instead of -Idir.
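For example (with a made-up path):

$ g++ -Wall -isystem /home/user/ext_lib/include my_test_program.cpp -o my_test_program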

Linking

Now that I have covered the compilation process, you may ask: We've got binary code already, what more is there to talk about? The answer is: linking. Linking is a major part of the ultimate goal, which is to get a running program. Linking is important to talk about because it is a frequent source of errors and frustrations among beginners. In short, linking means putting everything together.

If we consider this simple source file (hello.cpp):

#include "hello.hpp"

#include <iostream>

void hello() {
  std::cout << "Hello World!" << std::endl;
}

where hello.hpp is a header that simply declares the hello() function. We can compile it to obtain an object file like so:

$ g++ -Wall hello.cpp -c -o hello.o

which produces the object file called hello.o (the option -c tells the compiler to only perform compilation, no linking). So, we know that hello.o contains binary code, but how is it organized? The binary code is organized into a number of sections in the file, each under a symbol. We can print out the list of symbols using this command:

$ nm hello.o

and I get (on my system):

                 U __cxa_atexit
                 U __dso_handle
000000000000005f t _GLOBAL__sub_I__Z5hellov
0000000000000022 t _Z41__static_initialization_and_destruction_0ii
0000000000000000 T _Z5hellov
                 U _ZNSolsEPFRSoS_E
                 U _ZNSt8ios_base4InitC1Ev
                 U _ZNSt8ios_base4InitD1Ev
                 U _ZSt4cout
                 U _ZSt4endlIcSt11char_traitsIcEERSt13basic_ostreamIT_T0_ES6_
0000000000000000 b _ZStL8__ioinit
                 U _ZStlsISt11char_traitsIcEERSt13basic_ostreamIcT_ES5_PKc

which may look like just a bunch of gibberish to the layman, but even as a programmer with limited knowledge of the compiler's internal mechanics, some meaning can be drawn from it. The first column looks like some kind of addresses or offsets. Then, the single-lettered column looks like some kind of classification. And finally, the last column looks like some sort of hashed string containing some recognizable names ("hello", "ios_base", "char_traits", "basic_ostream", "cout", "endl", etc.). That last column lists the symbols in the object file (with some "decoration" or name-mangling), along with their classifications in the second column. Without explaining everything thoroughly, I can explain that the row 0000000000000000 T _Z5hellov means that there is a function, called "hello", defined in this object file, at offset 0 (at the start). Then, I can point to all those U rows, which mark symbols that are used, but not defined, within this object file ("U" officially stands for "undefined"). This is really all I need in order to explain, in basic terms, what the linking process does.
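As an aside, you don't have to decipher those mangled names by hand: GNU nm has a -C option to demangle them, and the c++filt tool does the same for any text piped through it (for instance, _Z5hellov comes out as hello()):

$ nm -C hello.o
$ nm hello.o | c++filt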

Let's say I have another source file (hello_main.cpp):

#include "hello.hpp"

int main() {
  hello();
  return 0;
}

Then, I compile and check its symbols:

$ g++ -Wall hello_main.cpp -c -o hello_main.o
$ nm hello_main.o
0000000000000000 T main
                 U _Z5hellov

As you see, this object file is much simpler: it has a main() function and it uses (or needs) the hello() function. That symbol is defined in my hello.o object file. So, I can put them together to make the program:

$ g++ hello_main.o hello.o -o hello_world
$ ./hello_world
Hello World!

Now, that's linking. When given only object files, the g++ compiler sees that it doesn't need to compile anything and simply links them together to produce the output hello_world program. I could have used the linker program directly, which is called ld, but the commands are a bit more cryptic, so I just used g++ instead.

As you can probably figure out by now, the job of the linker is to take all the object files and libraries that it is given, and find a way to satisfy all the "U" (or "used") symbols by finding a definition for each of them. It lives up to its name: it completes all the missing links among the different pieces of code. The way it achieves this is fairly straight-forward (from a high-level perspective). It just assembles a big list of all the defined symbols (a big hash-table), and then, goes through everything again to make the links that are needed. That's it. That's why this is often called the "stupid linker model", because very little intelligence is required by the linker to do its job.
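Just to make the idea concrete, here is a toy sketch of that "stupid linker model" in C++ (this is only an illustration I made up, not real linker code): each object file is reduced to its defined and used symbols, and linking is just a table-building pass followed by a look-up pass.

#include <iostream>
#include <map>
#include <set>
#include <string>
#include <vector>

// a toy model of an object file: just its defined and used symbols
struct ObjectFile {
  std::string name;
  std::set<std::string> defined;
  std::set<std::string> used;
};

// sketch of the linker's core logic: build a table of all definitions,
// then check that every used symbol has exactly one definition.
bool link(const std::vector<ObjectFile>& objs) {
  std::map<std::string, std::string> def_table; // symbol -> defining file
  bool ok = true;
  for (const auto& obj : objs)
    for (const auto& sym : obj.defined)
      if (!def_table.insert({sym, obj.name}).second) {
        std::cerr << obj.name << ": multiple definition of '" << sym
                  << "', first defined in " << def_table[sym] << '\n';
        ok = false;
      }
  for (const auto& obj : objs)
    for (const auto& sym : obj.used)
      if (def_table.find(sym) == def_table.end()) {
        std::cerr << obj.name << ": undefined reference to '" << sym << "'\n";
        ok = false;
      }
  return ok;
}

int main() {
  std::vector<ObjectFile> objs = {
    {"hello_main.o", {"main"}, {"_Z5hellov"}},
    {"hello.o", {"_Z5hellov"}, {}},
  };
  std::cout << (link(objs) ? "link OK\n" : "link failed\n");
  return 0;
}

Leave out one of the object files, or list hello.o twice, and this toy reproduces, in spirit, the two linker errors shown below.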

As a programmer, the most you will probably ever hear from the linker directly is when you forget some of the object files or libraries, and thus, some of the "U"'s will be left unsatisfied, and you will see an error similar to this:

$ g++ hello_main.o -o hello_world
hello_main.o: In function `main':
hello_main.cpp:(.text+0x5): undefined reference to `hello()'
collect2: error: ld returned 1 exit status

Or, the following if you have multiple definitions of the same symbol:

$ g++ hello_main.o hello.o hello.o -o hello_world
hello.o: In function `hello()':
hello.cpp:(.text+0x0): multiple definition of `hello()'
hello.o:hello.cpp:(.text+0x0): first defined here
collect2: error: ld returned 1 exit status

When I hear people say that compiler error messages are unclear or too cryptic, I always say that this is only because they don't understand how the compiler works. Once you understand which step failed (compilation, linking, etc.), what the compiler / linker expects, and what vocabulary these programs use (most of which I have tried to use and highlight in this tutorial), those messages become very clear and contain a wealth of information on how to fix the issue, as long as you are not afraid of a few mangled names and a few hexadecimal numbers, two things no programmer should ever be afraid of.

Dynamic vs. Static Libraries

Another big question-mark often hovering above novices' heads is: What on Earth are these static / dynamic libraries I always hear about? Let me clear that issue right up for you. For one, the "static" and "dynamic" terms here refer to the time at which the linking is done: either at link-time (when the executable is built) or at run-time, respectively.

A static library is really just a collection of object files. Often, for modularity, source files are kept fairly small, and thus, a large library will end up consisting of a large number of object files once each source file is compiled individually. Listing out all those object files when creating programs that use them is not very convenient, and thus, regrouping them into a single file (or a few files) is much nicer. This is all that a static library is: an archive of object files. In fact, the GCC suite uses the ar program to create static libraries, which is just a crude and simple archiving program (it puts multiple files into one file).

Here is our hello world example using a static library:

$ g++ -Wall hello.cpp -c -o hello.o
$ ar rcs libhello.a hello.o
$ g++ -Wall hello_main.cpp -o hello_world -L ./ -lhello
$ ./hello_world
Hello World!

where the -L ./ option tells the linker to add the current directory ./ to the list of directories in which to look for libraries to link with, and the -lhello option tells the linker to link with the "hello" library, meaning that it will try to find a library file called libhello.a or libhello.so (dynamic). Note that if a dynamic library exists, it will be preferred unless the -static option is used.
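And since a static library is just an archive, the same ar program can list what's inside it (the t option), and nm works on it just as it does on object files:

$ ar t libhello.a
hello.o
$ nm libhello.a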

A dynamic library is quite a bit more complicated. The idea here is that not only do we want to collect a large amount of binary code into a single file (like with a static library), but we also want to avoid having to bring all that code into the final executable programs. If many programs use the same library code, this can be a significant saving in the collective size of all those executables. To achieve this, we need the executable program to load and use the library as it runs, i.e., dynamically. This means the executable needs the dynamic library to run (i.e., it's a run-time dependency). Additionally, the executable needs to link with the dynamic library on its own, long after compilation, i.e., this is run-time / dynamic linking.

To achieve dynamic loading and linking of the library, there needs to be more machinery put in place than what a static library has. First, "loading" means copying the code into memory, mapping global variables into the process address space, and running static-initialization code. To be able to do this, a dynamic library needs all the same machinery that an executable program has, and because the operating system handles the loading, a dynamic library must be packaged into a format similar to an executable (in Unix-like systems, that file format is called ELF, for "Executable and Linkable Format"; in Windows, it's called PE, for "Portable Executable"). And for that reason, dynamic libraries are more like executable programs than they are like static libraries; in fact, technically-speaking, a dynamic library is just an executable without a "main" function.

Here is our hello world example using a dynamic library:

$ g++ -Wall -fPIC hello.cpp -c -o hello.o
$ g++ -shared -o libhello.so hello.o

which creates a dynamic library (shared object) called libhello.so. The -fPIC option (for "position-independent code") is needed when compiling object files destined for a shared object, and the -shared option tells the compiler/linker to produce a shared object instead of a program. Now, we can compile the main program and link to that dynamic library:

$ g++ -Wall hello_main.cpp -o hello_world -L ./ -lhello
$ ./hello_world
./hello_world: error while loading shared libraries: libhello.so: cannot open shared object file: No such file or directory

Oops... what happened? The program was successfully compiled and linked to the dynamic library; however, as I said, the dynamic library is required in order to run the program. But how does the program locate the dynamic libraries it depends on? It uses the system's default locations, which are system directories where you would normally install dynamic libraries, like /lib, /usr/lib or /usr/local/lib on Unix/Linux, and the "system" and "system32" directories on Windows. To find a dynamic library in the current directory (of the program), one option is to add the current directory to the library-path environment variable:

$ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:./
$ ./hello_world
Hello World!
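As an aside, the ldd tool lists the dynamic libraries that an executable depends on, and where each one was found; before setting LD_LIBRARY_PATH above, it would have reported libhello.so as "not found", which makes it very handy for diagnosing this kind of error:

$ ldd ./hello_world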

But, for a more permanent solution, the library path can be baked into the executable instead, to avoid modifying the user's environment variables:

$ g++ -Wall hello_main.cpp -o hello_world -Wl,-rpath,./ -L ./ -lhello
$ ./hello_world
Hello World!

where the sequence -Wl,-rpath,./ tells the linker to record the directory ./ (relative to the directory from which the program is run) in the executable, such that it will be considered first when looking for the dynamic library. There are many more options and peculiarities to dynamic libraries, which are worth reading up on in a more detailed guide.

As a final note on dynamic libraries, I must remind people that, being a standalone executable, a dynamic library must be considered as a separate "module" from the executable program that loads / links to it. This causes a number of non-trivial issues. For example, C++ exceptions cannot reliably propagate across a module boundary, because one module will not necessarily be able to make sense of an exception coming from another.

Another issue is that you cannot control which compiler, version or options were used to compile the dynamic library versus the executable, and you won't be able to see the mismatch until you run the program, and even then, you might not. This is because the C++ standard does not define a standard binary layout for objects in memory, nor the internal data members of standard library classes (e.g., std::string, std::vector, etc.), which together are called the Application Binary Interface (ABI). Compilers, versions and options often differ in the memory layout of their standard classes (e.g., to add debug information, or just improvements to the implementation between versions). In practical terms, this means that you cannot safely share C++ classes between modules (dynamic libraries / executables).

Finally, compilers can use different name-mangling schemes for the classes and functions exported from a library, and those schemes may not be compatible with each other. The only way to avoid this problem is to disable name-mangling altogether, which is done by marking exported functions as extern "C". When marked as such, a function looks, to the linker, like a C function, with an unmangled name. Name-mangling is what makes C++ namespaces and function overloading (functions with the same name but different parameters) possible, which means those features cannot be used on extern "C" functions.

So, long story short, when you write shared-objects / dynamic-libraries, you must have a C interface (API), where all functions and types shared between the modules are in C (or nearly so). Under the hood of those functions, of course, the library can be implemented in C++.
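Here is a minimal sketch of what such a C API might look like (all the names here are made up for illustration, re-using the Foo class from the earlier examples):

// in the library's public header (a C-compatible API):
#ifdef __cplusplus
extern "C" {
#endif

typedef void* foo_handle;   // opaque handle that hides the C++ object

foo_handle foo_create();
void foo_hello(foo_handle h);
void foo_destroy(foo_handle h);

#ifdef __cplusplus
}
#endif

// in the library's source file (C++ under the hood); because the source
// file includes the header above, these definitions get C linkage:
foo_handle foo_create() { return new Foo(); }
void foo_hello(foo_handle h) { static_cast<Foo*>(h)->hello(); }
void foo_destroy(foo_handle h) { delete static_cast<Foo*>(h); }

Notice the create / destroy pair: since the executable and the library might not even share the same heap implementation, the module that creates the object is also made responsible for destroying it.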

Conclusion

I hope that this tutorial has enlightened you about the overall process that takes source code to executable programs in C++. Or at least, I hope some holes in your understanding have been plugged. I tried to also insert a few guidelines throughout this tutorial, but the main focus was about the fundamental understanding of how things are done under-the-hood, because that is, in my experience, the most important tool to help diagnose errors and to plan the organization and build configurations for a large project. And if you read up to this sentence, then kudos to you for keeping up with my vichyssoise of verbiage that veered most verbose.
