bool is pretty intuitive, basically.
My question:

#include "stdafx.h"
#include <iostream>

using namespace std;

int _tmain(int argc, _TCHAR* argv[])
{
    double cost, finalPrice;
    bool type;

    type = true; // taxable
    cout << "Enter price: ";
    cin >> cost;

    if (type)
        finalPrice = cost * 1.07;
    else
        finalPrice = cost;
    cout << "Final price: " << finalPrice << "\n";

    return 0;
}

In the type = true line,
how does the computer know it's true? What makes it true?

And in bool type, type is a variable I created, right?

0 to 255? I see...

Anything bool can do, char or byte can do. It exists mainly as a courtesy to programmers, so that we can use the keywords true and false. What is happening behind the scenes is that when we assign something a value of false, we are actually giving it a value of zero. And when it's assigned true, it's given a value of 1 (I believe).

So when you have:

bool something = true;
if (something == true)
    // some code here

when compiled, the if statement becomes something like:

beqz something, endif

beqz stands for "branch if equal to zero". So if something is zero, it is false: skip down in the assembly code to some label called endif, past the body of the if. Otherwise, continue executing the body starting at the next line.

similarly if you have:

if (something == false)

then you have

bnez something, endofprogram

Here bnez stands for "branch if not equal to zero". So if something is not equal to zero, then it is true, and the code skips ahead to a label called endofprogram; otherwise it continues executing at the next line.

Hid my irony too well, I assume.
Sorry for that.

I was making fun of... uh oh
*whistle*


I guess you could try:

#include <iostream>

int main()
{
    bool t = true;
    bool f = false;

    std::cout << "true  = " << t << std::endl;
    std::cout << "false = " << f << std::endl;

    return 0;
}

For me this prints out

true  = 1
false = 0

But, I guess the value of true could be compiler dependent.

A bool can be thought of as an int with a range of 0 to 255. As such, 0 means false and anything else is true.

This is wrong. You shouldn't think of the boolean data type as an integer with a range of 0-255. A boolean value conceptually consists of a single bit, which can be 0 or 1: zero equals false, one equals true. So think of bool as holding only a true or false value. By design, the value 0 represents false, and anything else converts to true.

Let me clear up some confusion about bool. Strictly speaking, a bool value can be true or false, so it can only take two values. Practically speaking, any integer value is implicitly converted to bool as if you wrote bool(value != 0), meaning that any non-zero value is interpreted as a true bool value.

Storage-wise, a bool value only needs 1 bit of representation (0: false, 1: true; the C++11 standard, clause 9.6/3, guarantees a bool can be stored in a bit-field of any nonzero size), but, because memory is byte-addressable, a standalone bool variable has to occupy at least a full byte.

The C++11 standard, at 4.5/6, says:

A prvalue of type bool can be converted to a prvalue of type int, with false becoming zero and true becoming one.

And section 4.12/1 says explicitly that any conversion of an integral or pointer type to a bool type will result in false if the value is exactly zero, and true otherwise (also, nullptr converts to false).

So, bool is not just a kind-of renamed unsigned char type, and it's not provided merely as a convenience to the user; it is a type in its own right, and it can be stored efficiently, for example, as part of a bit-field or in specializations of STL containers such as std::vector, which require only 1 bit per bool value.

Finally, bool is a special integral type in arithmetic too: used in an arithmetic expression, a bool is promoted to int (false to 0, true to 1), and operations like incrementing a bool directly are deprecated or invalid.

That's about all you need to know about the bool type.

>> for example, as part of a bitfield or specializations of STL containers such as std::vector which will result in requiring only 1 bit per bool variable

In my opinion, implementing a specialized version of vector for bool was a bad idea.

>>In my opinion, I think this was a bad idea implementing a specialized version for vector of bools.

Well, the specialization of vector for the bool type is and continues to be a very controversial issue, with renowned experts on either sides. Personally, I haven't looked into it enough to form my own opinion. But I do appreciate the fact that the new standard is more explicit about noting that the general vector class template is not the same as the vector<bool> class and that they should be regarded as two distinct containers.

Why is that, firstPerson?

I draw my reasons from experience, not necessarily from its design. For example, when I was working under my professor, I had to read in a DNA matrix, which consisted of 0's and 1's. So naturally, I thought to use a vector of bools, since that was the natural choice for me. But as I started implementing the program, I realized that using a vector of bools instead of a vector of unsigned chars was a bad idea. I had difficulty using the vector-of-bool interface. For example, you would expect operator[] to return an lvalue reference that you could assign an rvalue to. But it turns out that std::vector<bool> uses a proxy, so it doesn't work the way you'd think. In the end I saw no real gain from using vector-of-bool. Sure, memory was optimized, but for me that wasn't really a problem. I can tell you that I got about a 50% performance increase by using a vector of unsigned char instead of a vector of bools. Anyway, from now on, I would rather not use a vector of booleans, unless someone can show a good reason, in a realistic situation, where it would be better than a regular vector or even std::bitset.

>>I had difficulty using the vector-of-boolean interface.

Exactly, that's the main argument that people make against the vector-of-bool specialization. Let me give a bit more background on this problem. The original decision to include a specialization was from the memory optimization point-of-view. But people soon realized that, even though the vector-of-bool provides an interface which essentially looks like the general vector template, it makes some hidden assumptions (like the proxy-object for lvalue indexing) that essentially break the interface. It was a simple mistake, nobody's perfect, including the architects of the STL. For example, in the case of indexing, they were satisfied by the fact that the proxy-object could act as a bool& variable in a statement like vect_bool[i] = true; , but the problem is that you cannot bind a bool& variable to this lvalue obtained from the indexing (including sending it to a function that takes a bool& parameter), i.e., you cannot do bool& b_ref = vect_bool[i]; .

The gist of the debate is this:

On one side, the more purist side, people argue that, by definition, the reference that is returned by the non-const indexing of any STL container is not required to be T& , it is only required to be of the type std::vector<T>::reference which may or may not be T& . Just like random-access iterator is not required to be T* (but it could), the only requirement is that it acts in a way that is similar to a T* . Technically speaking, they are right. Just like you cannot assume that std::vector<T>::iterator is a pointer T* , you are not supposed to assume that std::vector<T>::reference is a reference T& . The purist argument is also that you can easily deal with the various possible iterators (through templates) and that good C++ code should do the same with references. The problem is that iterator types are well defined in terms of valid expressions (Concepts) and iterator-traits, but references are not, and in the absence of specification about the expected behaviour of the std::vector<T>::reference type, the user is forced to assume that it is just a typedef for T& .

On the other side, the more realist side, people simply point out several reasons why you cannot expect the average (or even expert) C++ programmer to be that diligent and strictly obey the standard specifications (or lack thereof, needing to read between the lines). Mostly, people on this side of the argument argue that the std::vector<bool> specialization should not be a specialization of std::vector<T> , but rather a separate class on its own. That is, std::vector<bool> should behave exactly as any other vector, and the memory-optimized "bitfield" implementation of a vector-of-bool should have its own separate class and interface. Ironically, although this is a very valid argument coming from realism, its implementation is not realistic.

Whatever the point of view, inertia remains. You can't really change such a fundamental thing even if there are very good reasons to do so. There are no nice ways to smoothly deprecate a template specialization, and you can't make such a change to such an important programming language without causing a commotion and breaking legacy code. So, the lesser-evil solution was to make a special separate specification for the vector-of-bool to at least make the special distinctions w.r.t. the vector-interface very well documented.

Comments
Great explanation

>> On one side, the more purist side, people argue that, by definition, the reference that is returned by the non-const indexing of any STL container is not required to be T& , it is only required to be of the type std::vector<T>::reference which may or may not be T& . Just like random-access iterator is not required to be T* (but it could), the only requirement is that it acts in a way that is similar to a T* . Technically speaking, they are right. Just like you cannot assume that std::vector<T>::iterator is a pointer T* , you are not supposed to assume that std::vector<T>::reference is a reference T& . The purist argument is also that you can easily deal with the various possible iterators (through templates) and that good C++ code should do the same with references. The problem is that iterator types are well defined in terms of valid expressions (Concepts) and iterator-traits, but references are not, and in the absence of specification about the expected behaviour of the std::vector<T>::reference type, the user is forced to assume that it is just a typedef for T& .

That, I guess, is technically correct. I hadn't thought about it that way. Thanks for the realization.
