grumpier 149 Posting Whiz in Training

To grumpier:
So we have two possible solutions:
1. std::vector: variable declared size - OK; element access - slow.
2. std::valarray: variable declared size - OK; element access - fast.
Of course, it's not a good idea to advocate a simple and effective solution when we have a slow but usual (or rather, fashionable) one...

Your assertions of relative speed are incorrect.

vijayan121 gave a good summary of the history of valarray. In theory, there are some highly specialised circumstances in which valarray operations can be quicker than vector operations, but that relies on specific compiler support and on hardware features (vector processors) that were almost never exploited in practice.

Unless I missed something, he wasn't commenting on C arrays, but on std::valarray versus std::vector. There is little doubt that C array access is fastest.

As a matter of fact, there is. Element access of std::vector (and std::valarray) can be optimised by the compiler (inlining of the operator[] functions, etc.) and effectively decays to a C array element access (or, equivalently, a pointer dereference). This depends on how the compiler optimises, but good-quality modern compilers do it by default.
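A minimal sketch of the point: with optimisation enabled (eg -O2), all three of these typically compile to essentially the same code, a pointer dereference at a computed offset.

    #include <cstddef>
    #include <valarray>
    #include <vector>

    // After inlining, no abstraction penalty remains in any of these.
    float get_c(const float* a, std::size_t i)                  { return a[i]; }
    float get_vec(const std::vector<float>& v, std::size_t i)   { return v[i]; }
    float get_val(const std::valarray<float>& v, std::size_t i) { return v[i]; }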

grumpier 149 Posting Whiz in Training

On the other hand, Arkm, don't advocate valarray without a particular reason. The original post provided no particular reason to prefer valarray over a vector. The only requirement stated is ability to represent an array of float with length unknown at compile time. Both valarray and vector meet that requirement, and do it equally well.

You happen to be incorrect about one attribute of valarray: it can be resized. The cost of resizing (or of never resizing) is much the same for valarray and vector.
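A quick sketch, with one caveat worth knowing:

    #include <valarray>
    #include <vector>

    int main()
    {
        std::valarray<float> va(100);  // 100 value-initialised elements
        std::vector<float>   vv(100);

        va.resize(200);  // legal, but valarray::resize re-initialises ALL
                         // elements; the old contents are not preserved
        vv.resize(200);  // vector::resize keeps the first 100 elements
    }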

The advantages of valarray relate to it being specifically optimised for repetitive operations on an array of numeric values. If there is no need to do such operations (and the original post identified no such need) vector is a good general-purpose option.
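For example, valarray lets you write whole-array numeric expressions directly:

    #include <valarray>

    int main()
    {
        std::valarray<float> a(1.0f, 1000);  // 1000 elements, all 1.0f
        std::valarray<float> b(2.0f, 1000);

        std::valarray<float> c = 3.0f * a + b;  // element-wise, no explicit loop
        float total = c.sum();                  // built-in reduction
        (void)total;
    }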

As a rough rule, it is not a good idea to advocate a specialised solution unless you know that solution is relevant to the problem at hand. valarray is a specialised solution.

grumpier 149 Posting Whiz in Training

That code is too big for anyone to bother looking at. Try eliminating code to produce a small but complete code sample that exhibits your problem. If you get lucky, you will find/fix the problem in the process of producing that small code sample. If not, people will be more willing and able to help.

Odds are, the cause is an invalid operation on a pointer in code executed before the line where the "heap corruption" is reported. Invalid operations with a pointer include dereferencing a pointer that doesn't actually point at anything valid (eg a NULL, or an uninitialised pointer) or falling off the end of an array (eg accessing element 10 in a 5-element array).
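A deliberately broken example of the second kind:

    #include <cstring>

    int main()
    {
        char* buf = new char[5];
        std::strcpy(buf, "hello");  // writes 6 bytes ("hello" plus '\0') into a
                                    // 5-byte buffer: the heap is now corrupt
        delete[] buf;               // the corruption is typically detected here,
                                    // far from the line that caused it
    }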

grumpier 149 Posting Whiz in Training

The reason is that Base is a private base class of Derived. Implicit conversion from "pointer to Derived" to "pointer to Base" relies on public inheritance. Code using Derived (i.e. your _tmain() function) does not have access to private members or bases - that is what "private" means.

Your explicit conversion Base *p = (Base *)&d; just forces the conversion to happen. It would compile even if Derived did not inherit from Base at all, which is why it silences the error without fixing the design.
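A stripped-down illustration (plain main rather than _tmain):

    class Base {};
    class Derived : Base {};   // "class" means inheritance is private by default

    int main()
    {
        Derived d;
        // Base* p1 = &d;      // error: Base is an inaccessible base of Derived
        Base* p2 = (Base*)&d;  // compiles: a C-style cast bypasses access checks
        (void)p2;
    }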

grumpier 149 Posting Whiz in Training

There are a few factors that combine to leave you with less usable disk space than the labelled disk size.

Firstly, marketeers of hard disks conspire to make you think you're getting more than you are, by defining a GB as a billion (10^9) bytes rather than as 1024*1024*1024 bytes (the binary measure most operating systems report), which means labelled sizes are overstated by about 7%.
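For example, a drive sold as 500GB holds 500,000,000,000 bytes; divide that by 1024*1024*1024 and the operating system reports only about 465.7GB.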

Second, there is normally overhead related to basic management of the disk, which means more space is allocated to a file than it actually uses (eg a 1-byte file on a disk with 4KB clusters still occupies a full 4KB cluster). If you have lots of little files, they can consume considerably more disk space than their reported sizes would suggest. Different formatting choices can change this (eg reducing the cluster size), but the basic fact of life is that disk contents need to be logically indexed in order to save and retrieve data, and indexing schemes (eg file allocation tables) themselves consume disk space.

Third, and often the most significant if you don't have lots of little files, most operating systems allocate hidden files that can consume large amounts of disk space. Windows certainly does this: the swap file on most systems is, by default, larger than the installed RAM and lives on the same disk the operating system is installed on. So, if you have 1GB of RAM, you typically have at least 1GB of disk space allocated to the swap …

grumpier 149 Posting Whiz in Training

I read that a while back and didn't like it; it seems like all the guy is doing is describing himself and saying everyone else isn't a real programmer.

And you're suggesting that's unusual????

Virtually every programmer I've ever known considers himself or herself to be the only real programmer.