I am reading the (otherwise excellent) The C++ Programming Language (4th edition - Hardcover) (Stroustrup) and I cannot understand about 4 pages of it despite paying a lot of attention to them.
It's section 27.4.1 Composing Data Structures starting page 767.
After reviewing obviously unsatisfactory alternatives on page 767 for a tree structure, the author proposes on the next page (768) a very complicated structure, for reasons that seem obscure and incomprehensible to me.

Why not just use the following (apparently satisfactory) structure, that would likely be the first to come to mind to any programmer approaching the case:

template <typename DATA>
struct node{
    node <DATA> *    left;
    node <DATA> *    right;
    DATA            data;
};

No need for a base class. No need for a recursive base class (!!!!! why ????). Full access to the data from the node.

What am I missing?


template <typename DATA>
struct node{
    node <DATA> *    left;
    node <DATA> *    right;
    DATA            data;
};

That makes no sense. A node needs to point to another node, not to a DATA.

A node needs to point to another node, not to a DATA.

It does. node<DATA>* is a pointer to a node.

Why not just use the following (apparently satisfactory) structure

I haven't read the book, but presumably the solution presented hides the implementation of the nodes, whereas your struct exposes everything.

Just so that people understand what this discussion is about, I'm gonna post the essential bits of code from that section of the book.

The first solution that he puts forth (that you are critiquing) is this:

template <typename Node>
struct node_base {
  Node* left;
  Node* right;

  node_base() : left(nullptr), right(nullptr) { };

  void add_left(Node* ptr) {
    if( !left )
      left = ptr;
    else
      ; // .. do something else (?)
  };

  //... other tree-manip functions..

};

template <typename ValueType>
struct node : node_base< node<ValueType> > {
  ValueType value;

  node(ValueType aVal) : value(std::move(aVal)) { };

  //... other value-access functions..
};

Now, at face-value, this seems, as you said, sort of pointless, but I would disagree, and even more so considering that this is a bit of a set-up for what comes later in the same section (the next couple of pages), where the motivation for this becomes even more apparent.

But you can already see a hint of what the purpose of this "very complicated structure" is. And by the way, if you think this is a complicated structure... man, wait until you get a load of some serious data structure implementations, this thing is a piece of cake in comparison. So, the thing to observe here is that in the base class I wrote "other tree-manip functions" and in the top-level class I wrote "other value-access functions", and that's already one reason (and not so obscure either) for splitting things up like that, because, if nothing else, it puts the code related to tree manipulations in one place and the code for manipulating the data (set, get, compare, etc.) in another. And that can be a benefit by itself. But I grant you that that alone is not that great and one could argue that keeping things simpler (as in, all in one class) is actually better for clarity even with the mixed purposes.

There is, however, a practical advantage to this separation. Think about this for a moment: what if you want to specialize the node for a particular value-type? Consider this solution:

template <typename Node>
struct node_base {
  Node* left;
  Node* right;

  node_base() : left(nullptr), right(nullptr) { };

  void insert(Node* ptr) {
    if( static_cast<Node*>(this)->compare( ptr->get_value() ) ) {
      if( right )
        right->insert(ptr);
      else
        right = ptr;
    } else {
      if( left )
        left->insert(ptr);
      else
        left = ptr;
    };
  };

};

template <typename ValueType>
struct node : node_base< node<ValueType> > {
  ValueType value;

  node(ValueType aVal) : value(std::move(aVal)) { };

  const ValueType& get_value() const { return value; };

  bool compare(const ValueType& rhs) const {
    return (value < rhs);
  };

};

Now, with the above, I can easily specialize the node class template for a particular value type, such as a const char*:

#include <algorithm>   // std::copy
#include <cstring>     // std::strlen, std::strcmp

template <>
struct node< const char* > : node_base< node< const char* > > {
  const char* value;

  node(const char* aVal) { 
    std::size_t l = std::strlen(aVal);
    char* tmp = new char[l + 1];
    std::copy(aVal, aVal + l + 1, tmp);
    value = tmp;
  };

  node(const node&) = delete;
  node& operator=(const node&) = delete;

  node(node&& rhs) : value(rhs.value) {
    rhs.value = nullptr;
  };
  node& operator=(node&& rhs) {
    delete[] value;   // release the old string before taking ownership
    value = rhs.value;
    rhs.value = nullptr;
    return *this;
  };

  ~node() {
    delete[] value;
  };

  const char* get_value() const { return value; };

  bool compare(const char* rhs) const {
    return (std::strcmp(value, rhs) < 0);
  };

};

You see, the point with this is that when I specialize (or replace) the implementation for the "node" class (which only handles the data, not the tree manipulations) I only have to rewrite the parts that relate to the type of data I'm storing in it. The point is, I don't have to repeat the code that does the tree manipulation (e.g., the "insert" function, in that example). This is one of the classic purposes of the Curiously Recurring Template Pattern (CRTP).
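To make the reuse concrete, here is a minimal usage sketch (my own, not from the book), assuming the node_base / node code above is in scope; both the generic node<int> and the node<const char*> specialization get their insert logic from node_base unchanged:

#include <iostream>

int main() {
  node<int> iroot(5), ia(3), ib(8);
  iroot.insert(&ia);   // 5 < 3 is false -> goes left (shared node_base code)
  iroot.insert(&ib);   // 5 < 8 is true  -> goes right
  std::cout << iroot.left->get_value() << " "
            << iroot.right->get_value() << "\n";   // prints: 3 8

  node<const char*> sroot("m"), sa("a"), sz("z");
  sroot.insert(&sa);   // same insert, now driven by the strcmp-based compare
  sroot.insert(&sz);
  std::cout << sroot.left->get_value() << " "
            << sroot.right->get_value() << "\n";   // prints: a z
}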

But it's when the author introduces the Balancer policy that this whole thing becomes even clearer. Now, you can take a policy like the balancing algorithm and imbue the base class with it, without affecting the implementation of the derived class.

These types of tricks are extremely useful when creating generic data structures because you have to surgically manipulate things like that, and it becomes very important to make precise "orthogonal" cuts through the components and their functionality.


Superb explanation. Sounds like you should co-author the 5th edition!
Needless to say, I will be thinking more about this matter.

Another 2 things that I don't understand.

1)

Section 28.2.1.1 p784
I understand the general idea that you want your conditional not to trigger an evaluation of the second part when the first part is true, whether you use it at run-time or at compile-time.

However, he seems to be saying that, for that reason, he can't use his Conditional alias.

But in the following example, I am doing just that and it compiles flawlessly (with vs2013 and g++4.8)

/*
Stroustrup does not describe his Error<T>, so I wrote this
structure ok that triggers an error if instantiated with B=false,
for the purpose of this example
*/
#include <type_traits>

template <bool B>
struct ok{
    static_assert(B, "error occurred");
};

template <bool B, typename T, typename F>
using Conditional = typename std::conditional<B,T,F>::type;

// next line compiles perfectly... and is using "using" !
Conditional<std::is_integral<int>::value, int, ok<false>> x=234;

So what did I not understand here?

2)

Section 26.3.6 Overaggressive ADL

std::cout <<"Hello, world" <<endl;
Stroustrup's comment: OK because of ADL

But it is certainly NOT ok on my compilers! (unless I add std:: or using namespace std, which shouldn't be needed according to him).
Is it simply that g++ 4.8.2 and vs2013 are not conforming on this?

1) Section 28.2.1.1 p784

You are indeed missing the main point here. The reason why you want to avoid the template instantiation is because of the failure case, not the successful case (as you tested). The thing is, if you want to instantiate a template like Conditional<A,B,C>, the types A, B, and C must be instantiated before you instantiate the Conditional template. By analogy to normal functions (instead of meta-functions, what Stroustrup calls "type functions"), before calling a normal function, each argument must be evaluated. Similarly, before instantiating Conditional, all arguments (A,B,C) must be instantiated.

The case that Stroustrup is describing here is when one of the arguments (say, B) cannot be evaluated when the condition (A) is false. This isn't really a matter of template aliases versus the "old-style" typename ..::type technique. For example, if you had the following:

typename std::conditional<
  std::is_integral<T>::value,
  typename std::make_unsigned<T>::type,
  T
>::type

There is a problem because when the typename std::make_unsigned<T>::type argument cannot be instantiated (because T has no unsigned variant), then the whole thing cannot be instantiated. In reality, in all the cases where make-unsigned would fail, we also know that is-integral would evaluate to false, and therefore, the make-unsigned argument is never really needed in that case. In other words, the make-unsigned argument is prematurely instantiated, and this can cause obvious problems. In the example here, when is-integral fails, the conditional is supposed to return T, but instead, it will fail with a compile-time error.
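To make this concrete, here is a small sketch of my own (not Stroustrup's code; identity_of is just a hypothetical helper) showing both the eager failure and the usual cure of delaying the ::type lookup until after the selection:

#include <string>
#include <type_traits>

template <typename T>
using MaybeUnsigned = typename std::conditional<
    std::is_integral<T>::value,
    typename std::make_unsigned<T>::type,   // instantiated before the condition is consulted
    T
>::type;

using A = MaybeUnsigned<int>;              // fine: unsigned int
// using B = MaybeUnsigned<std::string>;   // hard error: make_unsigned<std::string> has no
                                           // ::type, even though the condition picks T

// The cure: pass the meta-functions themselves, and apply ::type only to the chosen one.
template <typename T>
struct identity_of { typedef T type; };

template <typename T>
using MaybeUnsignedLazy = typename std::conditional<
    std::is_integral<T>::value,
    std::make_unsigned<T>,      // merely named here, not instantiated
    identity_of<T>
>::type::type;                  // ::type applied only to the selected branch

using C = MaybeUnsignedLazy<std::string>;  // fine: std::string
using D = MaybeUnsignedLazy<int>;          // fine: unsigned int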

This is also a classic problem when using SFINAE, as in, consider this:

template <typename T>
typename std::enable_if< 
  std::is_integral<T>::value,
  typename std::make_unsigned<T>::type
>::type foo() {
  //...
};

template <typename T>
typename std::enable_if< 
  !std::is_integral<T>::value,
  T
>::type foo() {
  //...
};

int main() {
  foo<std::string>();
};

The problem with the above is that the SFINAE won't apply because the failure happens too early. The rule with SFINAE is that the failure has to occur in the immediate context of the substitution. In other words, a failure to do the substitutions for the types of the function parameters (or return value) is "not an error", but a failure that occurs while instantiating some other template (here, make_unsigned<T>) that is needed in order to form the function signature is indeed a hard error. And so, the code above will fail, instead of gracefully selecting the second version of the foo() function. The solution to the above issue with SFINAE is the following:

template <typename T>
typename std::enable_if< 
  std::is_integral<T>::value,
  std::make_unsigned<T>
>::type::type foo() {
  //...
};

template <typename T>
typename std::enable_if< 
  !std::is_integral<T>::value,
  T
>::type foo() {
  //...
};

Stroustrup may not like this old-style typename ..::type mechanism in favor of the newer template alias mechanism. However, there are a lot of non-trivial issues that actually make this old-style mechanism very desirable. If you are interested in those reasons, you should look at the book "C++ Template Metaprogramming: Concepts, Tools, and Techniques from Boost and Beyond" by David Abrahams and Aleksey Gurtovoy. Abrahams is really the authority when it comes to working out all the practical issues of TMP.

2) Section 26.3.6 Overaggressive ADL

I frankly don't know what Stroustrup was smoking when he came up with that example. This is definitely wrong. It is "argument-dependent" lookup, which means that the lookup of the function (or operator) is dependent on the arguments, not the other way around. There is no conceivable way that the compiler will figure out that endl should refer to std::endl just because ADL on std::ostream resolves to an operator in the std namespace. At least, if this is true, it is extremely surprising to me. I can certainly forgive Stroustrup for this error, as you are bound to make some errors when writing a comprehensive book like that, but that's definitely an error; ADL simply does not work that way, and if it did, it would shatter everything I know about ADL.

Many thanks for these immensely useful and wise comments.
Greatly appreciated.

I have figured out why the erroneous std::cout example was given.
The example is based on an apparently classic valid example (well described in the Wikipedia ADL entry), but completely deformed into an erroneous one in the text. I suspect Stroustrup asked one of his students to write that section, and the student knew about as much about the matter as I did :-) (which would not be a desirable thing).

Basically the valid example is this:
without ADL,
std::cout << "anything";
would not compile (unless a preceding using namespace std is present), because the operator needed here, a global
operator<<(std::ostream&, const char *)
does not exist.

However, because of ADL, the std namespace (picked from the first argument) is also searched, the
std::operator<<(std::ostream&, const char*)
is found, and therefore the statement compiles.

Now, given that my confidence in understanding C++ has been thoroughly shaken once again, let me ask, just to be on the very very safe side: in section 25.2.2, the use of a string literal as a template argument is (apparently correctly) declared erroneous on page 724, but it is then happily used in 2 examples on page 725,
Vec<string,""> c2
and later
Vec<string,"fortytwo"> c2;
That surely must have been written by the same student who wrote the ADL example, right?
Unless it is because perhaps C++14 or C++17 will allow it (?) and the book is a bit ahead of its time, or for some other obscure reason (?) that is above my capacity to understand at this point.

And just when I thought I understood ADL, I realise I still have no clue at all of what it is and what it is doing:

namespace sp{
    bool b;
    void doit(bool){}
}

int main(){
    doit(sp::bool); // Won't compile ! Why is ADL not working here?
}

The problem with C++ is that even after 5 years of reading and trying it, it is still (for most of us, anyway) impossible to figure it out, and I am talking not just about the myriad exceptions and horrendous hacks absolutely required to make it a usable and useful language, but even about understanding the language rules themselves (rules that apparently even the creator of the language doesn't fully understand himself, as this discussion appears to evidence!)

I think the future will almost certainly be more of a functional/Lisp-style language with close to zero syntax and low-level access still possible.

I just don't think that, when the older C/C++ generation retires, there will be enough replacements in the younger generation who know the language sufficiently.

There are very valid historical reasons why C++ developed to be fully incomprehensible and why a generation was able to cope with it (because they grew up with it and the burden was imposed on them over decades, in small increments), but there is no way the younger generation will want to shoulder the burden of this unnecessary complexity (nor have the time to lose trying to deal with it), especially compared to the simplicity and power of the aforementioned alternatives.

First of all, sp::bool does not exist; you probably meant sp::b there.

But even then it won't work, because ADL is about where the type of the argument is defined, not where the variable is defined (the argument doesn't even need to be a variable). The type of the argument here is bool, and since bool is not defined in the sp namespace, that namespace won't be part of the lookup.

If you define a type in sp (maybe as a typedef) and then make b an instance of that type, the lookup will behave as you expect. It will even still behave that way if you define b outside of the namespace (as long as the type is defined within the namespace).
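For instance, a quick sketch (my own example, not from the book):

namespace sp{
    struct flag {};          // the *type* lives in sp
    void doit(flag){}
}

sp::flag b;                  // the *variable* lives in the global namespace

int main(){
    doit(b);                 // OK: flag is defined in sp, so ADL finds sp::doit
}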

Ok, now I understand a little more of ADL

namespace sp{
    struct a_class{} a;
    void doit(a_class){}
}
int main(){
    doit(sp::a);
}

Why is this working?
Oh, very easy!
You only have to look at the C++ standard (the older written version, section 3.4.2, p. 33) and wade through a page of dense rules to learn (if I understood it correctly!!!) that ADL is based on where the type is located, not where the variable is located, and that there are about 1+8+3+2 special cases to consider.
Don't forget to reread this in the future when obscure bugs creep into your code.
Fortunately there are only 770 similar pages in the language specification to learn...

in section 25.2.2, the use of a string literal as a template argument is (apparently correctly) declared erroneous on page 724, but it is then happily used in 2 examples on page 725,
Vec<string,""> c2
and later
Vec<string,"fortytwo"> c2;
That surely must have been written by the same student who wrote the ADL example, right?

Wow... you are pointing out some real flaws in that book. I'm starting to have doubts about the care that Stroustrup put into that book (which I have not read, beyond what you have pointed out so far), because these are some serious things that I'm pretty sure any competent reviewer should have picked up on. In that example, it is not quite using a string literal as a template argument, but it's still wrong; in fact, it's doubly wrong. The idea here is that the string literal is (presumably) converted to a constexpr std::string object and that that object is used as the template argument. That's wrong because (1) you cannot use arbitrary literal types as template value parameters (only integral types, enumerations, pointers to objects with linkage, and the like), and (2) a constexpr string is not even a literal type because it is a non-trivial class. There was a proposal for C++14 for allowing arbitrary literal types as template value parameters, but even though Stroustrup states in that section of the book that this restriction is there "for no fundamental reason", I think that this proposal is dead in the water, AFAIK, because there are indeed fundamental reasons (and pretty obvious ones too!) why this can't work (it could be extended to non-integral built-in types, or something, but not much beyond that).
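For reference, here is a rough sketch (mine, not from the book) of what C++11 does and does not accept as template value parameters:

template <int N>            struct IntParam {};    // OK: integral type
enum class Color { Red, Green };
template <Color C>          struct EnumParam {};   // OK: enumeration type
char global_text[] = "hi";                         // a named object with external linkage
template <char* S>          struct PtrParam {};    // OK: pointer type, but the argument
                                                   // must be the address of an object
                                                   // with linkage
PtrParam<global_text> fine;
// template <double D>      struct FloatParam {};  // error in C++11: floating-point type
// template <std::string S> struct ClassParam {};  // error: class types are not allowed
// PtrParam<"hello"> bad;                          // error: a string literal is not the
//                                                 // address of a named object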

And just when I thought I understood ADL, I realise I still have no clue at all of what it is and what it is doing

As sepp2k said, ADL is all about the types, not about the variables or values that are passed. ADL might seem complicated, and it is indeed not the simplest thing, but when you think about it a little you will see that those rules are very logical and make intuitive sense.

To be honest, unless you are writing a compiler, you don't really need to know all those rules in detail. I don't know all the rules in detail, and I never needed to in many years of advanced C++ programming. Because C++ is a strict and formal language, the rules for name lookups, ADL, template deduction, overload resolution and so on, have to be very detailed and complex (to handle every corner case), and if you really want to study them you can look up the digests of those rules on the cppreference pages on ADL or overload resolution. But like I said, if you don't intend to write a compiler, the details of these rules are rather useless. In my experience, all those rules boil down to "it works as expected, most of the time". If you just follow your intuition as to how calls should get resolved (i.e., what choice makes more "common sense"), you will be correct 99% of the time, and the rest of the time you'll get an "ambiguous" error which usually makes sense and can be solved fairly easily.

And it's also important to understand that the complexity of these look-up rules is not something that is unique to C++ or that is somehow a design flaw of C++. C++ has a number of very powerful features, notably:

  • Templates, which enable generic programming and meta-programming.
  • Overloading, which enables static polymorphism and complex compile-time dispatching mechanisms.
  • Consistent CV-qualification (e.g., const-correctness), which helps guarantee correctness and enables static analysis.
  • Namespaces and consistent scoping rules, which enable strong separation of library and implementation scopes, leading to robust and simplified library code.

These features can be found, to some extent, in other languages, but C++ is the only language, as far as I know (except maybe for D), which combines all of these features. And a quick look at the rules for ADL and overload resolution will reveal that much of the complexity of those rules is due to the fact that this is where all of these features clash together. For example, having templates without having ADL would be extremely limiting. Having no overloading would make C++ just about as terrible a language as Java. CV-qualifications have a huge benefit, but they permeate the standard because they have to be factored in everywhere. Having namespaces and consistent scoping rules is a necessity for a modern professional-grade language; it is what distinguishes proper modern infrastructure languages from toy languages for academics and hobbyists.

If you look at the ADL or overload resolution rules found in other languages that support some similar features (e.g., overloading), you will find that the rules are much simpler, but they are only simpler because they lack most of the features that C++ offers, and therefore, only need to handle a few simple cases.

The point is, the complexity of the C++ language is, in my opinion, a natural consequence of the power that its feature-set offers, and such a powerful feature-set is required in certain domains, but it's complete overkill in others (e.g., smartphone apps).

One pet peeve that I have had with C++ with respect to this overloading and ADL stuff is that I really like the idea of treating non-static member functions the same as free functions with an additional parameter (and vice versa), so that a.add(b); is the same as add(a,b); (one single function, callable either way). This is realized to a limited extent in Python, and it is fully realized with overloaded operators in C++. I would really have liked to see this realized for all functions, not just operators. But this was proposed too late (because it's mostly useful for generic programming, which took a while to flourish), and never got put into the language, and now, it cannot really be put into it without breaking existing code or significantly complicating ADL / overloading rules.

The problem with C++ is that even after 5 years of reading and trying it, it is still (for most of us, anyway) impossible to figure it out,

Yeah, that's true. But that's true for almost any serious profession or skill. You can never "figure it all out", i.e., there is no way you can completely cover the knowledge-space nor keep all the information current and active in your brain. What you do is develop intuitions, and look them up frequently, when you are unsure. I very often look up basic C++ mechanisms or tricks that I happen to use less often. In other words, I often have moments like "oh, this doesn't feel right, I'll look it up", or "I think I remember there was a trick for that, I'll look it up". And if you are not doing this in your daily work (in whichever language in whichever domain), then it probably means that your work is just trivial and repetitive. This is really nothing specific to C++.

and I am talking not just the myriad exceptions and horrendous hacks absolutely required to make it a usable and useful language

In my experience, if you know C++ well (and I don't mean every detail of it, but just good practical experience with it), you don't really have to write any crazy hacks, ever. I don't recall the last time I wrote anything of dubious nature like that. When you learn to work with the type system and semantic rules, using them to your advantage, you can avoid being pinned in a corner and having to use a dirty hack to get yourself out of it. If you understand the rules and work with them, things can go exceptionally well, far more so than in any other language, because of the high level of robustness and behavioral guarantees.

But this is a bit of a chicken and egg problem, as in, you cannot write clean (no hacks) code without having a good grasp of the language, and you cannot get a good grasp of the language without having written a lot of code (and gained experience and intuitions). That's why, at first, it can seem like there is a lot of dirty hacks involved in programming C++, but most of these dirty hacks slowly disappear over the years, and you end up writing very clean and well-behaved code.

rules that apparently even the creator of the language doesn't fully understand himself

Well, you have to understand that Stroustrup is not the only creator; call him more the originator. There have been many people involved, from academia and industry on every side (users of C++, compiler writers, computer builders, etc.). And to be perfectly clear, I don't think that Stroustrup is personally involved in doing much coding these days. He has much more of an academic perspective on the language than some of his peers. I would not really expect him to have the kind of "hard-wiring" required to immediately recognize certain things that are odd, such as Vec<string, "hello">, which would immediately make an experienced programmer jump. That's the difference between academic and practical knowledge. The academic perspective is useful to see the bigger picture and to understand the reasoning behind the mechanisms or techniques used. The practical perspective is about developing intuitions and practical tricks to guide you through the everyday tasks at minimal expense (without thinking/designing too much, or doing too much research on obscure rules). So, you shouldn't be so focused on the nitty-gritty details when reading Stroustrup, and clearly, he wasn't either.

I think the future will almost certainly be more a functional/lisp style language with close to zero syntax and low level access possible.

Functional languages are nice in theory and have some neat practical implications. I would say that they are a nice subject of study, and some of it can be applied. However, full-blown usage of pure functional languages will never reach wide-spread adoption. These languages just have way too much overhead (exponential overhead, actually) and they clash far too much with the underlying architectures, i.e., all computer architectures are fundamentally about mutable data and imperative execution, and trying to force a pure functional paradigm onto that is just horrible. They are a fun curiosity and an occasionally appropriate tool, but not a practical, comprehensive solution.

I just don't think that, when the older C/C++ generation retires, there will be enough replacements in the younger generation who know the language sufficiently.

From what I gather, the average age of C++ programmers seems to be 25-35 or so, right now. That's just from the people I see as prominent figures (people who participate in forums, maintain libraries, contribute code, attend conferences, etc.). And it seems that the impression many old-timers are getting today is that the average age is just getting lower and lower. And people in their late twenties or early thirties are not retiring anytime soon. So, what you say might be true in 20-30 years or so, assuming the interest in C++ from the next generation immediately drops to zero right now.

but there is no way the younger generation will want to shoulder the burden of this unnecessary complexity

It is not unnecessary complexity. I can definitely tell you that the complexity of writing serious infrastructure code (e.g., servers, databases, big-data, analytics, simulators, operating systems, etc.) is far more serious than the complexity of a language like C++. So, you have to look at the big picture. If you want to be a serious infrastructure (or systems) programmer, you have to acquire some serious skills in both the application domain(s) and the programming language(s). And if an application domain and a particular language do not mesh well together, it is far more trouble (and loss of productivity) than learning a more complex language that actually works for the domain (has the right features, has enough power, etc.).

And if the younger generation is not willing to take on the burden of learning to work with complex systems, then we are in trouble, regardless of application domain (coding or elsewhere). I am not afraid of this; there are always people who are interested in seriously getting to know how complex systems work, and not just interested in using them.

Let's return to the template string argument.
First let's clarify exactly what I mean.
1) templates can already have char * parameters
2) literal strings (as far as I understand the standard ... which is not much ...) have a fixed location in memory, available at any time irrespective of scope. For example:

const char * my_global_string(){
    const char * cstr = "I am always available even when the function has been exited";
    return cstr;
}

Here the pointer returned by my_global_string points to valid text even outside the function
3) therefore it would be extremely easy for a compiler designer to implement a literal cstr as a template argument (which would decay to a char * parameter)
4) currently C++11 does not allow this (and apparently it is not simply a current compiler non-conformance problem)
5) to make it even clearer:

template <char * P> struct demo { };

char text[]="hello";

// since this is already allowed:

demo<text> dt;   // ok

// why not allow also this to make life of everyone easier?

demo<"hello"> dh; // error

Therefore the obvious question is:

Why hasn't this yet been implemented (in 20 years!), as this obvious, extremely intuitive 'feature' would be quite useful in simplifying source code and would (at a minimum) remove (another) useless and pointless limitation so typical of C++?

Is it planned for c++37? :-)

This problem is actually very straightforward. It is allowed to have pointers as template parameters. However, you can only instantiate the template with a pointer to something that has external linkage. So, your example (with text) only works because text is a global variable with external linkage. If you change its linkage to internal, it doesn't work:

template <char * P> struct demo { };

static char text[]="hello";

// since this is already allowed:
demo<text> dt;   // error, text has internal linkage

The problem with this is that things (variables, functions, etc.) that have external linkage have a program-wide address, i.e., an address that is resolvable during compilation because it will end up at some fixed address in the data section of the final program. This allows the compiler to form a consistent instantiation of the template.

When things have internal linkage or no linkage, there is no fixed address that can be named consistently across translation units; in fact, there might not be an address at all (it could be optimized away, or created on the stack). Therefore, there is no way to instantiate a template based on that non-existent address. When you have a string literal, like just "hello", it is a literal with no linkage at all. This is why this thing cannot work, and will never work.

You have to understand that C++ is really close to the metal, and most of the limitations that may seem excessive are actually there because this is where the real world implementation issues collide with theoretically desirable features.

But I don't think that answers why you cannot do it.
Indeed, here are 2 very simple ways that would appear to work: at the point of instantiation of a template containing a literal string argument L, either:
1) the compiler automatically inserts the statement
char ANONYMOUS_OR_UNIQUE_RANDOM_NON_ACCESSIBLE_NAME[] = L;
just before the instantiated class, and instantiates the class replacing L with that name (which will decay to a char pointer), as in the preceding post
2) the compiler inserts in the instantiated class the statement
static char ANONYMOUS_OR_UNIQUE_RANDOM_NON_ACCESSIBLE_NAME[] = L;
(with the corresponding definition following the class) and replaces L with a pointer to that text wherever L was used in the class

So without changing anything inside the compiler logic you can do it. As a matter of fact the compiler could just do it as a simple pre-processing step !

This is no surprise, since this is exactly what the hapless C++ programmer does manually when he wants to achieve the same effect in each situation!

Well, option 2 is certainly not possible. The problem is not with the instantiation of the class but with the identity of the instantiation of the class (or specialization). You cannot insert, within the class, the entity that defines its identity; it's a circular-logic problem, similar to trying to create an object of an incomplete type.

Option 1 could technically be done, but there are still a number of problems with this. For one, allowing the hidden creation of an externally visible symbol is not something that would sit well with some people (not me personally, but some people wouldn't be happy about that).

Another important issue would be about this situation:

demo<"hello"> a;

demo<"hello"> b;

Are a and b of the same type? No. That's a surprising behavior that most novices wouldn't expect, and also the compiler cannot, in general, be required to diagnose this kind of a problem (even though it could emit a warning). In other words, this could easily be a source for a silent bug. Now, the programmer could fix this by doing this:

typedef demo<"hello"> demo_hello;

demo_hello a;
demo_hello b;

but is that really better than this:

char hello[] = "hello";

demo<hello> a;
demo<hello> b;

And also, with the typedef solution, you still have a problem when the type demo_hello is used in different translation units, because, again, their types will be different. And that leads to a violation of the ODR (One Definition Rule), which the C++ standard is very adamant about upholding. Just to be clear, if there are multiple instantiations of a normal template, like int_vec<5>, from different translation units (if the compiler is not able to merge them), it is not a problem because those instantiations will be identical in every way. But multiple instantiations of demo_hello will be different because the char-pointer will be different in each instance; that's where the ODR violation comes from. This is, in general, the reason for not allowing pointers to things without external linkage, but obviously, it also persists if we try to generate external linkage for literals (option 1).

There is no doubt that there could be a way to make it work, but at the end of the day, is it really worth it? The cost would be quite a bit of complications to be added to the standard in order to accommodate this, it would also mean requiring that compiler-vendors implement certain behavior they might not like to implement (and they really have the final word on whether features really make it or not), and it could also require specifying implementation details that are normally not within the realm of what the standard specifies (such as name mangling or linkage mechanisms). The benefit is gaining a feature that probably nobody would ever really use.

There is far more hope in the idea of allowing arbitrary literal types as template parameters (which would also solve the string literal issue), but as I said, even that is not very likely for similar technical reasons. And at least, with that feature, the cost would be less and the benefit would be far greater.

I hope you are starting to get a feel for how difficult it is to create a programming language like C++ and what kind of issues the C++ committee has to wrestle with.

No !
The implementation occurs at the preprocessing step.
Therefore none of your objections apply.
Witness (using possibility no 2 which is my favorite):

// initial source code
template <int qty, char * default_text>
struct init_vector{
    std::vector<std::string> v = std::vector<std::string>(qty, default_text);
};
init_vector<100, "hello"> v1;
init_vector<100, "hello"> v2;

/*
pre-processing step:
transform all templates where a char * argument is a literal cstr into new source code;
this does not need to be done by a real compiler;
it can be done by anyone using an external pre-processing routine acting on the source code
*/

// result = source code that will be compiled is:

template<int qty>
struct init_vector_hello_12892734{
    std::vector<std::string> v = std::vector<std::string>(qty, hello_12892734);
    private: static const char * hello_12892734;
};
const char * init_vector_hello_12892734::hello_12892734 = "hello";

init_vector_hello_12892734<100> v1;
init_vector_hello_12892734<100> v2; // same type as v1 !

// compile

// You're done. Enjoy a simpler life !

And why stop here? Let's use the same mechanism to allow in-class initialisation of strings, such as:

struct at_last_we_can_now_do_it{
    static const char * important_text = "Enjoy an even more simple life !";
};

And to make it clearer to all and remove any possible template ambiguity, the syntax for a literal cstr template parameter should logically be:
template <char[] text>
instead of
template <char * text>

The implementation occurs at the preprocessing step.

It doesn't matter when it occurs. Moreover, it cannot occur at the preprocessing step, because it requires semantic analysis, which occurs at the compilation step.

Anyways, what you demonstrated there is what compilers already do when instantiating templates. The addition of the static member is not really important; compilers have other mechanisms for that purpose. This is not the core of the issue at all, and the problems I mentioned still apply, especially the ODR violation!

Here is a more explicit illustration of the problem:

In demo.h:

#ifndef DEMO_H
#define DEMO_H

template <char* Str>
struct demo { /* some code */ };

void do_something(const demo<"hello">&);

#endif

In demo.cpp:

#include "demo.h"

void do_something(const demo<"hello">& p) {
  /* some code */
};

In main.cpp:

#include "demo.h"

int main() {
  demo<"hello"> d;
  do_something(d);
};

Now, the sticky question is: Is the "hello" string in demo.cpp the same as the "hello" string in main.cpp? If not, then the type demo<"hello"> as seen in main.cpp is not the same as the type demo<"hello"> seen in demo.cpp.

Consider this (stupid) piece of code:

In half_string.h:

#ifndef HALF_STRING_H
#define HALF_STRING_H

#include <cstring>   // std::strlen
#include <string>    // std::string

template <char* Str>
struct half_string { 
  char* midpoint;
  half_string() : midpoint(Str + std::strlen(Str) / 2 + 1) { };

  static const char * p_str;
};

template <char* Str>
const char * half_string<Str>::p_str = Str;

std::string get_first_half(const half_string<"hello">&);

#endif

In half_string.cpp:

#include "half_string.h"

std::string get_first_half(const half_string<"hello">& p) {
  return std::string(half_string<"hello">::p_str, p.midpoint);
};

And in main.cpp:

#include "half_string.h"

int main() {
  half_string<"hello"> hs;
  std::string s = get_first_half(hs); // <-- CRASH!
  return 0;
};

Your option 2 is not going to solve this problem. This will be a problem as long as the string that is passed to the template does not have external linkage.

As for option 1, you still have a problem. Assuming the compiler generates a string with external linkage for the literal, you would end up with this:

In half_string.h:

#ifndef HALF_STRING_H
#define HALF_STRING_H

#include <cstring>   // std::strlen
#include <string>    // std::string

template <char* Str>
struct half_string { 
  char* midpoint;
  half_string() : midpoint(Str + std::strlen(Str) / 2 + 1) { };

  static const char * p_str;
};

template <char* Str>
const char * half_string<Str>::p_str = Str;

std::string get_first_half(const half_string<"hello">&);

#endif

In half_string.cpp:

#include "half_string.h"

char * __half_string__hello__str = "hello";

std::string get_first_half(const half_string< __half_string__hello__str >& p) {
  return std::string(half_string<"hello">::p_str, p.midpoint);
};

And in main.cpp:

#include "half_string.h"

char * __half_string__hello__str = "hello"; // ERROR Multiple definitions!

int main() {
  half_string<__half_string__hello__str> hs;
  std::string s = get_first_half(hs); // <-- CRASH!
  return 0;
};

The compiler cannot determine where it needs to insert the string with external linkage because the compiler only looks at a single translation unit at a time. It simply cannot know that, leading to inevitable ODR violations. And if the compiler generated some random symbol to guarantee uniqueness of the definitions, you would be stuck with a type mismatch between the two instantiations because their pointers would be different (two different symbols).

So, at the end of the day, you need the programmer to create a string with external linkage and to put its definition in an appropriate (and unique) translation unit, because the programmer is the only actor (programmer vs. compiler) who can determine this; the compiler cannot. This is why this requirement exists.

Let's simplify by (for the moment) just discussing your preferred case (the 2nd one).

The problem is that you are not following my "new syntax rules". Had you followed them, the program would lead to the answer "hel", which hopefully is what you had in mind.

The new syntax requires that you pass a character literal in the form of an array:

template <char[] text> struct A{}; // OK: will accept the new "literal text" char array

template <char * text> struct B{}; // old style: accepts a ptr but *not* "literal text"

// so that you can use it like this

A<"hello"> a; // behaves the new way

// or

char * pc;
B<pc> b; // will behave like the current standard only

// but you cannot do this:
char * pc;
A<pc> a; // wrong: new syntax requires a char array, not a char *

Therefore line 6 of your main file would be a compile error for the new syntax.

If done correctly, it works as expected (on g++), does not crash, and returns "hel":

half_string.h
 #pragma once
 #include <string>
 #include <cstring>

 template <int>
 struct half_string {
        const char* midpoint;
        half_string() : midpoint(Str + std::strlen(Str) / 2 + 1) { };
        static const char * p_str;
        static const char * Str;
        static const char * half_string_hello;
 };

 std::string get_first_half(const half_string<0>&);
half_string.cpp
  #include "half_string.h"
  template <int dummy>
  const char * half_string<dummy>::Str = "hello";

  template<int dummy>
  const char * half_string<dummy>::half_string_hello = "hello";

  template<int dummy>
  const char * half_string<dummy>::p_str = Str;


  std::string get_first_half(const half_string< 0 >& p) {
    return std::string(half_string< 0 >::p_str, p.midpoint);
  };
main.cpp
    #include "half_string.h"
    #include <iostream>


    /* next line not relevant
    char * __half_string__hello__str = "hello"; 
    */

    int main() {
        half_string<0> hs;
        std::string s = get_first_half(hs); // <-- NO crash
        std::cout << "Your desired result is here: "<< s.c_str() << std::endl;
        return 0;
    };

A few comments:
1) this would be the code generated by the pre-processor
2) the dummy int in the template is there because your template only had a single char[] argument (which gets removed by the pre-processor algorithm), so some dummy argument is needed for it to remain a template for the sake of this demo
3) my definition of a pre-processor is a function that takes in text or source code and returns source code, while a compiler takes in source code and returns non-text intermediate (or final) machine-readable output

Oops!
I actually have now come to better understand your underlying point.
Indeed, there are some problems when using multiple compilation units.
So the solution would probably be that, when the compiler encounters a template cstr array argument, it puts it in a special area of string storage in its internal representation.
At link time with the other compilation units, all those special storage areas are merged together so that any duplicated cstr gets a single final storage area/address in the final .exe.
By keeping a separate area for those strings, the compiler avoids mixing them with other strings that would not be affected by this.

So the solution would probably be that, when the compiler encounters a template cstr array argument, it puts it in a special area of string storage in its internal representation.
At link time with the other compilation units, all those special storage areas are merged together so that any duplicated cstr gets a single final storage area/address in the final .exe.
By keeping a separate area for those strings, the compiler avoids mixing them with other strings that would not be affected by this.

Yes. That is the solution, in general. This is pretty much what the compiler does for integral template parameters (e.g., int). The point is that an integral type (like int) can be dealt with at compile-time, meaning that the compiler can compare integer values to determine that they are equal, and it can also create a hash or some other method to incorporate the integer value into the name-mangling of the instantiated template. So, when you have some_class<10> in one translation unit and some_class<10> in another, they will both resolve to the same instantiation and therefore can be merged or otherwise considered as the same type at link-time.

If string literals could be treated by the compiler the same way as integral constants, then the compiler could do the same. However, types like char* or char[] (which, as template parameter types, end up being the same thing, since an array parameter is adjusted to a pointer) have the issue that the argument could also just point to a string with external linkage, instead of a literal. But, if you used an alternative to that, like a constexpr-capable string class, which would have compile-time value semantics (what the standard calls a "literal type"), then the compiler could, given some additional assumptions, be able to deal with it. This is rather difficult to achieve for strings, because writing a constexpr string is pretty hard (but possible), and it would also imply a constexpr comparison function and a constexpr hashing function. Now, the compiler could have a built-in constexpr string implementation (or one in the standard library), which would need to be fully specified in the standard (or at least, in the ABI).

So, assuming we get support for arbitrary literal types, and that we get a literal string type similar to StrWrap or str_const but with added functionality, then this could be made to work. The first is, as I said, part of the things that have been proposed (not sure of its status though), and the second could be a library component (standard or not) assuming it can be done with sufficient capabilities (e.g., hashing).
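For the curious, here is a rough sketch (my own, in the spirit of the str_const idea mentioned above, not a standard facility) of a C++11 literal string wrapper with a constexpr comparison, which is the kind of building block such a feature would need:

#include <cstddef>
#include <stdexcept>

class str_const {
  const char* const p_;
  const std::size_t sz_;
public:
  template <std::size_t N>
  constexpr str_const(const char (&a)[N]) : p_(a), sz_(N - 1) {}
  constexpr char operator[](std::size_t n) const {
    return n < sz_ ? p_[n] : throw std::out_of_range("str_const");
  }
  constexpr std::size_t size() const { return sz_; }
};

// C++11-style recursive constexpr equality, usable at compile time:
constexpr bool equal_from(const str_const& a, const str_const& b, std::size_t i) {
  return i == a.size() ? true
       : a[i] != b[i]  ? false
       : equal_from(a, b, i + 1);
}
constexpr bool operator==(const str_const& a, const str_const& b) {
  return a.size() == b.size() && equal_from(a, b, 0);
}

static_assert(str_const("hello") == str_const("hello"), "compared at compile time");
static_assert(!(str_const("hello") == str_const("world")), "compared at compile time");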

Thanks for the interesting explanation.
To complete this thread, I have 2 final questions about static initialisation.

1) I have a function similar to this:

#include <map>
#include <string>

std::string get_description(int i){
    static const std::map<int, std::string> mp = { {23, "hello"}, {2, "bonjour"}, {34, "hola"} };
    return mp.at(i);   // note: operator[] is not available on a const map
}

Now, if I have understood Stroustrup correctly (section 15.4.1, p. 442) (and shall we add: if this is not another error in this book...), this is 100% thread-safe. Is this true? In particular, since the static will be initialised on first use of the function, don't we run the risk that 2 different threads will first use it at the same time?

2) I have read elsewhere (C++ FAQ, 2nd edition, section 16.04, p. 220) that static data members are guaranteed to be initialized before the first call to any function F within the same source file as the static data's definition, but only if F is non-inline.

What is the underlying logic behind that difference of treatment here between the inline and non-inline function?

don't we run the risk that 2 different threads will first use it at the same time?

No, Stroustrup is correct: local static variables are thread-safe in C++11. The standard requires that some form of synchronization is used to guarantee that the object is initialized exactly once, even if several threads reach the declaration at the same time. I covered this in a thread a little while back, see here.
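As a small sanity-check sketch (mine, assuming the get_description function and its includes from your post above), both threads can hit the first call at the same time and the map is still initialized exactly once before either lookup proceeds:

#include <iostream>
#include <thread>

int main() {
    std::thread t1([] { std::cout << get_description(23) << "\n"; });
    std::thread t2([] { std::cout << get_description(2)  << "\n"; });
    t1.join();
    t2.join();   // prints "hello" and "bonjour", in either order
}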

What is the underlying logic behind that difference of treatment here between the inline and non-inline function?

A function that is defined in a particular translation unit will become available to be called (after loading the program) only once all the static data (e.g., global variables or static data members) defined in the same translation unit have also been loaded and initialized. For functions that have been inlined by the compiler, their definitions actually appear where they are called (that's what inlining means), which could be in a different translation unit, and therefore, it cannot be guaranteed that the static data from the other translation unit has already been initialized. That's the difference. This is just another face of the static initialization order fiasco.
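A common workaround sketch for that ordering problem ("construct on first use"; my own example, not from the FAQ): hand out the static data through a function-local static, so it is initialized on first use no matter which translation unit the caller, inline or not, lives in:

#include <map>
#include <string>

// Instead of a namespace-scope static object, expose it through a function:
inline const std::map<int, std::string>& descriptions() {
    static const std::map<int, std::string> mp =
        { {23, "hello"}, {2, "bonjour"}, {34, "hola"} };
    return mp;   // initialized exactly once, on first call, thread-safely in C++11
}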

That completes this very interesting thread.
Many thanks again
