mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I'm not sure I understand what you mean. I guess you are asking how to make a game that has the same level of graphics quality as a 10-year-old game, but on today's hardware?

It is certainly possible. To understand this, you have to know that there are two things that make newer games look better: more advanced features on graphics cards and higher-detail artwork (textures, 3D models, etc.). The fact that more advanced features are available on modern graphics cards does not, in general, force you to use them. Most (if not all) of the basic features that were available in 2001 are still available today; it's just that people don't use them as much because the better options are now available on all modern computers and run fast enough. It's certainly possible to use the same old, basic, cheaper-looking features that games were limited to a decade ago. When I talk about features like that, I mean things like: pixel shaders (current) vs. the fixed rendering pipeline (old); anisotropic / tri-linear texture filtering (current) vs. bilinear / nearest-neighbour texture filtering (old); multi-texturing (current) vs. single textures (old); and so on.

As for the artwork, there is no problem in using lower-resolution textures, coarser models, shallower scenery, etc. The only reason higher-quality artwork is used now is that graphics cards have enough memory to deal with it, when they previously didn't.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Contrary to popular belief, the debate between Christopher Columbus and his peers was not about whether the Earth was round or flat; they all agreed it was round, but they disagreed on its size and on the size of Eurasia. The consensus on the size of the Earth was pretty accurate to the real figure, and since they didn't know about the existence of America, they rightly figured that the trip across to Asia would be impossible. Imagine if there were no America: crossing the Atlantic, and then the Pacific, in one trip, with no re-supply point, would be crazy, especially with the means of the time. Columbus used the wrong measurement units and thought the Earth was much smaller than it really is, and on top of that, he had re-calculated, very wrongly, the size of Eurasia to be much larger than it really is. That's why he thought the trip was possible: it put the east coast of Asia just a bit east of the real American east coast, and it's also why he naturally assumed he had landed in India when he reached the Caribbean, because with his calculations, that's almost exactly where he expected to be. This whole myth of "everyone believed the Earth was flat until Columbus proved otherwise" comes from a fictional biography by Washington Irving.

And on a related topic, contrary to popular belief, people stopped believing in a flat Earth a long time ago (except for a resurgence by some …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

100,000 is too much for Indians.

Yeah, well, that was a few years ago. Today, at least here in Canada, you can get a better laptop than mine for about 40,000 rupees (700 CAD). I just checked, as an example: an Acer with a 15.4-inch screen, i5-4200, 1 TB HDD, 10 GB RAM, Win8.1, and Radeon R7 M265 (2 GB) graphics, for 700 CAD. I obviously don't know what laptop prices are like in India, but I imagine they are of the same order of magnitude.

What are you talking about in "Quality"?

I don't know. When it comes to laptops, I think that the construction quality is very important, because you carry it around and knock it around a bit, so it's important that it's well-built. I always found Lenovos and Sonys to seem a bit wonky in their construction. This is also why I like both Acer and Toshiba. And I'm a bit on the fence with Dell laptops. But this is, by no means, a professional opinion. When it comes to internal components, all companies mostly have the same stuff, except that some try to cut costs by putting in cheap components or shabby assembly.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

When it comes to laptops, I have always found that Toshiba or Acer are pretty safe bets. I think HP is OK too. I've never trusted the quality of Lenovo, Sony or Dell, for laptops. That's just my 2 cents.

Btw, your specs are very close to what I have in my current Acer laptop (i7 2nd gen, 1.5 TB HDD, 8 GB RAM, 1 GB graphics, Win7), that I bought a couple of years back (for about the equivalent of 100,000 rupees), and I'm very happy with it.

Gribouillis commented: +1 for acer aspire with i7 +0
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

There are several Linux distributions that focus on security of various kinds. I'm not sure exactly which would be the most appropriate for you. They have different focuses. Some aim at anonymity (e.g., via Tor or I2P, like in Tails Linux). Some aim at preserving the integrity of the system, like Immunix. Some aim at running secure servers, like Fedora, CentOS and RHEL.

And when it comes to securing a system, ironically, the NSA can be a useful source of information. Using SELinux-enabled systems is probably a good idea. You might be paranoid about NSA backdoors, maybe justifiably so, but I think SELinux largely predates the start of NSA's criminal activities, and it's mostly a "way to do things" (protocol) as opposed to an actual implementation (AFAIK), so, the implementations of it are probably trustworthy.

It sounds like what you want is mainly to be able to store important information securely. For that purpose, you need either full-disk encryption (e.g., TrueCrypt or dm-crypt) or file-system encryption (e.g., EncFS or eCryptfs), or both. Personally, I'm not convinced that full-disk encryption is really that good, because if someone accesses your system (physically or remotely), then having user or root access to the system implies being able to read / write data on the encrypted drive; at least, that's how I understand it. I guess the point is that securely storing data, to me, implies that the data is never left unencrypted (or readable) for any period of …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

There are mainly two easy ways to "parallelize" the operations in a for-loop like this.

One option is to use the SSE(2-3) instruction sets. SSE instructions can basically perform multiple (4) floating-point operations at once (in one instruction). This is something that the compiler can do automatically. If you are using GCC (or ICC), these are the appropriate compilation flags:

-mfpmath=sse -Ofast -march=native -funroll-loops

If you add those to your compilation command, the compiler should optimize more heavily for your current architecture (your computer), using SSE instructions, and unrolling for-loops to further optimize things.

Another easy option for parallelizing code is to use OpenMP. OpenMP allows you to tell the compiler to create multiple threads, each executing one chunk of the overall for-loop, all in parallel. It requires a few bits of markup in your code, but it's easy. Here is a parallel for-loop that does a logarithm on an array using 4 threads:

#include <cmath>

void do_log_for_loop_omp_sse(float* arr, int n) {
  #pragma omp parallel num_threads(4)
  {
    #pragma omp for
    for(int i = 0; i < n; ++i) 
      arr[i] = std::log(arr[i]);
  }
}

When you compile code that uses OpenMP with GCC, you need to provide the command-line option -fopenmp to enable this.

Also note that you can easily combine the two methods by using OpenMP in your code and telling the compiler to use SSE instructions.
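For example, assuming GCC and a source file called main.cpp (a hypothetical name), the combined compilation command might look like this:

g++ -fopenmp -mfpmath=sse -Ofast -march=native -funroll-loops main.cpp -o main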

Just for fun, I wrote a program that measures the time for all these four methods (for 3000 …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

There are two different matters at play here. There is the mathematical concept of column vectors versus row vectors. And there is the memory-layout issue of storing matrices in column-major or row-major order. These are two completely separate issues. In mathematics, we almost exclusively use column vectors. In mathematics, memory layouts do not exist, since mathematics is abstract, and memory layouts are an implementation detail / choice when you put the abstract math into real code.

So, to make things clear, in mathematics, we have this:

|X X X T|   |X|
|X X X T| x |Y|  =  M x V
|X X X T|   |Z|
|0 0 0 1|   |1|

which performs the rotation and translation of the 3D vector (X,Y,Z). If you transpose the entire expression, you get an equivalent expression:

(M x V)^T = V^T x M^T = | X Y Z 1| x |X X X 0|
                                     |X X X 0|
                                     |X X X 0|
                                     |T T T 1|

(where all the X's are transposed too). The above is how the mathematical conventions of row-vectors and column-vectors relate to each other. In other words, using row-vectors just means that you transpose everything. But like I said, in mathematics, we use, almost exclusively, column-vectors. And I just noticed that Direct3D documentation uses row-vectors... (sigh).. (rolleyes)..

In OpenGL, the matrices are stored in column-major ordering, meaning that the memory index of each element of the matrix is as follows:

| 0 …
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

What is really important is to resize by an amount that is proportional to the current size of the array. Typical factors vary from 1.25 to 2. The reason why it is important to use a growth factor like that is that it gives the allocation (and copying) an amortized constant cost.

Here is how it works. Let's say the factor is 2, and you currently have N (valid) elements just after a resizing to a total capacity of 2N. This means that you can add N more elements to the array before you have to resize again, at which point you will have to copy the 2N valid elements from the old memory to the newly allocated memory. That copy will have a O(2N) cost, but you did it once after N additions to the array, meaning you did it with 1/N frequency, and so, the average cost is O(2N * 1/N) = O(2), which is what we call an amortized constant cost. If the factor were 1.5, by the same calculation (with frequency 1/(0.5*N)), you get an amortized cost of O(1.5*N * 1/(0.5*N)) = O(3). So, for any factor F, the cost is O(F/(F-1)). That's where you have a trade-off: with a larger growth factor the amortized cost is lower (the limit is 1), but the excess memory is greater (on average, the unused capacity is N*(F-1)/2). A typical choice is 1.5, or around that. This is the standard way to handle a dynamic-sized …
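To make the mechanism concrete, here is a minimal sketch of a push_back that grows by roughly a factor of 1.5 (my own illustration, not how any particular standard library implements it):

#include <cstddef>
#include <memory>
#include <utility>

template <typename T>
class dyn_array {
  std::unique_ptr<T[]> data_;
  std::size_t size_ = 0, cap_ = 0;
public:
  void push_back(T value) {
    if (size_ == cap_) {
      // Grow by roughly a factor of 1.5 (and at least one element), so that
      // the copy below happens rarely enough to have amortized constant cost.
      std::size_t new_cap = cap_ + cap_ / 2 + 1;
      std::unique_ptr<T[]> new_data(new T[new_cap]);
      for (std::size_t i = 0; i < size_; ++i)
        new_data[i] = std::move(data_[i]);
      data_ = std::move(new_data);
      cap_ = new_cap;
    }
    data_[size_++] = std::move(value);
  }
  std::size_t size() const { return size_; }
};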

StuXYZ commented: great post +9
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

So, if the markdown is rendered by a stock version of PHPMarkdown, and the mishandling of code blocks in lists and quotes comes from PHPMarkdown, then it would seem that the issue should be reported there.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

The problem is that your operator== function is defined in the header, which means that it will be compiled for each cpp file that includes it, leading to multiple definition errors (see the One Definition Rule). What you need to do is declare the function in the header and implement it in the cpp file:

In Vector.h:

//..

bool operator==(const vector& v1, const vector& v2);

//..

In the Vector.cpp:

// ...

bool operator==(const vector& v1, const vector& v2){
    return (v1.GetX() == v2.GetX()) && 
           (v1.GetY() == v2.GetY()) &&
           (v1.GetZ() == v2.GetZ());
}

// ...

Also, notice that I used pass-by-reference instead of pass-by-value for the function parameters.

Also, you are playing with fire with your code. You should never do using namespace std; in a header file, nor should you create a class called vector in the global namespace. This will cause a conflict with the standard vector class template.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I agree. The reason documentation is often very poor is the boredom factor. Writing documentation like overviews, examples and tutorials is just extremely boring, and nobody wants to do it. At least, that's the only reason why my library is not very well documented.

With the fairly well-established practice of interface commenting and the automatic generation of reference documentation (e.g., doxygen), there is at least a decent start, which is better than nothing, but not that easy to navigate either.

A lot of what actually makes a code-base easy to understand is that it relies on good, well-established coding practices. When there are no awkward design choices or hidden surprises in the behavior, it's a lot easier to deal with. These practices include things like flat class hierarchies, value semantics, hidden polymorphism (i.e., non-virtual interfaces), no side effects within functions, and const-correctness, just to name a few.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

This is indeed something of a problem. It is always difficult to jump into a new and large code-base. You should concentrate on any available overview documentation (e.g., explaining how it works or is structured), tutorials (e.g., showing how to make mods), and reference documentation (e.g., class references).

If you need to really dig into the source code of the library / application, then you need to find a relevant starting point. This could be something that is similar to what you want to do or modify. When you find a decent starting point, just look it up and try to understand what it does and how it works, and work your way out from there by looking up related code (e.g., code that is called within the code you are looking at). The "find in files" feature is definitely a central tool when exploring a code-base, because it gives you an instant snapshot of where certain things are used and declared. For example, if you want to modify the behavior of class A, then there is a good chance that you will want to look at all the places where class A is used, so that you can assess whether your modified version will work or whether you need to modify other things.

There is no doubt that doing this is hard and takes a lot of patience. I've done this several times with different code bases (some small ( < 50,000 LOCs ), some big ( > 1,000,000 LOCs …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

The problem is that you are looking in the wrong places. The reason why you cannot execute files on a USB drive is that the automatic mounting of USB drives in Ubuntu is somehow set to read-write permissions, but no execute permission. Basically, the mount permissions are a mask that applies recursively to everything under the mount folder. So, you need to mount the USB drive with execute permission (rwx) if you want executable files on it to be executable when the USB drive is mounted.

This is a classic problem that has been around for years, and it baffles me why the Ubuntu team (or anyone related to the automount feature) has not yet gotten around to adding a GUI setting somewhere for the user to change the default mounting permissions for external / USB drives. And I think rubberman is right that the automount uses the antiquated fuse-ntfs driver, which mounts NTFS without execute permission by default (or maybe not at all).

Here are some solutions:

1) Edit your fstab file for that USB drive entry. I use this for ntfs drives:

UUID=<UUID of USB drive>  /media/<mount-folder>  ntfs-3g  permissions,uid=1000,gid=1000,dmask=022,fmask=022  0  0

and you can get the UUID of your USB drive with the command $ sudo blkid.

2) Mount the USB drive manually with the ntfs-3g option. You can just do this:

$ sudo mkdir /media/<mount-folder>
$ sudo mount -t ntfs-3g /dev/sdb1 /media/<mount-folder>

where /dev/sdb1 should be replaced by whatever your device name is, you can …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

The old town of Quebec city:

[photo attachment]

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

The main thing that determines longevity is inertia. The only reason people still use Fortran or Cobol is inertia. To some extent, C as well, although it is often a practical (minimal) language for small tasks.

Therefore, the languages that are likely to live for at least a couple of decades beyond today are those with the most weight today, which essentially means C, C++ and Java, plus PHP and Python (in their application domains); anything else is somewhat precarious. Also, languages that are tied to a specific company, instead of being governed by an open standard, will always be dependent on that company's continued market share and willingness to support the language; this includes, for example, Objective-C (Apple), Go (Google), and .NET (Microsoft). But, of course, you already know this, because you depended on VB (classic)! There is a reason (beyond the technical ones) why the classic paradigm with VB applications was to make the VB code just a thin front-end (GUI) that called the "real" back-end C++ code: because VB could be killed any day (as it was), and that would only mean having to port / re-write some trivial front-end code, while protecting (in time) the really important, "hardcore" back-end code that would be very tricky to port.

Microsoft forced the move to NET by deprecating both VB Classic and VC++. I think little of NET as it bloated big time IMO.

It's not true that MS deprecated VC++. In fact, lately, there has been more …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I agree with rubberman that you don't really need to know assembly these days for most tasks. There are still some dark corners in which you will find assembly code, which mostly include very low-level pieces of things like operating systems or compilers, very critical functions that need nano-optimization (i.e., simple functions that are called so often that every clock cycle counts), and very tiny hardware platforms that don't support anything else (e.g., PIC microcontrollers, which typically just run a simple input-output mapping via a few lines of assembly code). The point is, today, if you are writing assembly, you are probably writing a very short piece of code for a very specific purpose. People simply don't write anything "big" in assembly, and frankly, I doubt anyone ever did, even back in the day (it's just that small systems were the norm back then).

However, it is very common in the programming field that people know how to read assembly, more or less. As a programmer, especially in some performance-critical areas, you occasionally encounter assembly code, but as a product of compilation. For example, you write a program and you want to know what assembly code the compiler produces when compiling it (which you can get with the -S compiler option). This can be useful to see how the code is optimized, whether you need to change anything to get it to be more efficient once compiled, etc. Then, there are also the occasional debugging tasks that involve …

rubberman commented: Good post. I have written boot loaders for x86 RT systems - not simple, but interesting. +12
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

What is intelligence anyway,.. Technically, many systems are intelligent to a degree, and often in very narrow bands.

That is clearly one of the core questions that people have been trying to answer, and there is definitely no clear line separating what we would recognize as real "intelligence" from what is simply a smart solution or sophisticated program. There seem to be a few critical components that are the distinguishing factors: high-level reasoning, learning, and situational awareness. I think that any system that lacks all of these cannot really be considered AI, and one that has all three is definitely very close to being really "intelligent".

In the department of high-level reasoning (a.k.a. "cognitive science"), the areas of research being pursued are things like probabilistic computing (and theory), Bayesian inference, Markov decision processes (and POMDPs), game theory, and related areas. The emerging consensus right now is that approximation is good and fuzziness is good. Reasoning means understanding what is going on and predicting what will happen (possibly based on one's own decisions), and doing that exactly is impossible (intractable); even we (humans) don't do that. This is why probabilistic approaches are much more powerful: you can quickly compute a most likely guess, and some rough measure of how uncertain that guess is, and then base your decisions on that. You can see evidence of that with Watson, as he (it?) always answers with …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Yes, you can either define the operator as a member of the class or as a free function.

As a member function, you do it as follows:

//A class for a number of bank accounts.
class BankAccount
    {
    friend ostream& operator<<(ostream&, const BankAccount&);
    friend istream& operator>>(istream&, BankAccount&);
    private:
        int accountNum;
        double accountBal;
    public:
        BankAccount(int = 0,double = 0.0);
        void enterAccountData();
        void displayAccounts() const;
        void setAccts(int,double);
        int getAcctNum() const;
        double getAcctBal() const;

        // Add the less-than operator:
        bool operator<(const BankAccount& rhs) const;
    };

//..

bool BankAccount::operator<(const BankAccount& rhs) const {
  // insert code here, to compare 'this' with 'rhs'
  // for example, this:
  return this->accountNum < rhs.accountNum;
};

And as a normal (free) function, you would do:

//A class for a number of bank accounts.
class BankAccount
    {
    friend ostream& operator<<(ostream&, const BankAccount&);
    friend istream& operator>>(istream&, BankAccount&);
    private:
        int accountNum;
        double accountBal;
    public:
        BankAccount(int = 0,double = 0.0);
        void enterAccountData();
        void displayAccounts() const;
        void setAccts(int,double);
        int getAcctNum() const;
        double getAcctBal() const;
    };

//..

bool operator<(const BankAccount& lhs, const BankAccount& rhs) {
  // insert code here, to compare 'lhs' with 'rhs'
  // for example, this:
  return lhs.getAcctNum() < rhs.getAcctNum();
};

It is generally preferable to use free functions for operator overloading.
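With either version in place, here is a minimal usage sketch (assuming the BankAccount class above and a std::vector of accounts):

#include <algorithm>
#include <vector>

void sort_accounts(std::vector<BankAccount>& accounts) {
  // std::sort uses operator< by default to order the elements:
  std::sort(accounts.begin(), accounts.end());
}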

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

The problem is that you are trying to sort BankAccount objects, but that class does not have a less-than comparison operator. The standard sort algorithm (unless otherwise specified) will use the less-than < operator to order the values. You need to overload that operator to be able to use the sort function. Here is a start:

bool operator<(const BankAccount& lhs, const BankAccount& rhs) {
  // insert code here that returns true if 'lhs' is less than 'rhs'.
};
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

The memory leak is the least of your worries. This function is riddled with undefined behavior, memory leaks, heap corruption, memory corruption, and general mayhem. I'm surprised that it makes it as far as completing one iteration.

1) When you allocate a single object with new, you need to delete it with delete, not with delete[]. This is because when you use delete[] (which is for deleting an array of objects), it will attempt to find the size (number of objects), which is not stored when you allocated a single object, and therefore, it is generally going to read some undefined value for the size and attempt to free that number of objects, which is going to cause heap corruption, see here.
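A minimal illustration of the pairing rule (my own sketch, not the code from this thread):

int main() {
  int* single = new int(42);
  delete single;       // allocated with 'new'   -> freed with 'delete'

  int* many = new int[10];
  delete[] many;       // allocated with 'new[]' -> freed with 'delete[]'
}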

2) The use of the typedef in this local structure declaration:

typedef  struct tree
  {
         void *state;
          tree *action[4];
  };

is very antiquated C syntax. The correct way to do it in C++ (and in modern C) is as follows:

  struct tree
  {
      void *state;
      tree *action[4];
  };

3) You never initialize the state pointers within your tree nodes. When you create your new tree nodes, you (correctly) initialize all the action pointers to NULL, but you never initialize the state pointer to point anywhere. This means that the state pointers point to some arbitrary (undefined) location in memory, and then you perform memcpy operations on that. This is a memory corruption problem, see here. Consider …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Check out compability between processor/RAM and motherboard.

This will have to do with the motherboard. You have to know the exact make and model of your motherboard and look up the technical sheet for it. Generally, they should specify which family of processors it is good for, and what RAM technology it supports. The RAM is very likely not going to be a problem (at least, in the desktop world it never is, unless there is a very large age gap between the two). The processor-motherboard compatibility will be a bit trickier. If the new processor is from the same manufacturer (AMD or Intel) and of the same generation, then there shouldn't be a problem, but otherwise, you can check, but don't get your hopes up. Motherboards are generally designed for a particular family of processors.

Check out compability between processor/RAM and Linux

That's not going to be a problem. The processor and the RAM are two core parts of the computer and they are not governed by software or drivers. They are run via the hardware and firmware on the motherboard, so, that's where the compatibility is critical. If the computer can run at all, then it means the motherboard / RAM / processor are compatible, and from that point, any operating system will run just fine.

Is there like "transistor count number" or something, what do I have to look at?

If you look up any specific processor (exact make …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I understand your concerns Dani. Here is one suggested "workflow":

  1. The OP writes up his question / new thread.
  2. There is some box in the form to add a bounty to his thread, with some money amounts and corresponding number of members notified. He selects the option and an amount.
  3. A randomly generated (but well-distributed) list of members appears (in no particular order), where half (or so) are selected.
  4. The OP can unselect some members and select others instead if he wishes, as long as the number of members matches the amount at the end.
  5. The OP posts the question, and the targeted members are notified.
  6. When a targeted member clicks on the notification, he can read the thread (the usual page), but he has the opportunity to opt out of responding to the bounty by forwarding it to another member (from the original list, or a new one, excluding members already targeted).
  7. Or, if someone has already given a good response to the question, in the opinion of the targeted member, he can forfeit his bounty in favor of that answer / member, effectively saying "he said what I would have said, give the bounty to him". That could also trigger some rep-points for that member, as it is, effectively, an up-vote.
  8. If the OP is satisfied by one or more members' responses, he awards the bounty to him/them. The OP can also see what members received forfeited bounties.

I think that (3) is good for the reasons I …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I think that the bounty should be available to all, i.e., any member who ends up giving the "accepted answer" to the question should be entitled to the bounty, regardless of whether or not they were originally called out by the OP. Otherwise, it would be unfair and could promote a certain "class division", so to speak (the same "elite" members are collecting all the bounties).

The second thing is, what about maybe making the system anonymous? What I mean is, the OP puts down the bounty, and depending on the amount, it buys him a certain number of members to be notified, but then, he doesn't get to choose the members from a list, but rather some algorithm would pick out the members (without telling the OP which members were picked). I think this could remove some of the pressure of being specifically called out to answer a question, and would allow you to have a more randomized algorithm that wouldn't just always pick out the same top-ranking users all the time. I imagine that many members that are somewhat new to Daniweb (and even veterans) don't really know, by name, that many other members that could answer their question, and would just end up picking all the top members from whatever forum they are posting in. For example, in the C++ forum, the highest profile members (which are pretty much me, AD, and deceptikon) would be on the top-three of everyone's bounty, all the time, which isn't very …

ddanbe commented: Could not agree more. +0
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

So Linux, is like kind of worktable and all the applications are tools?

Yes, like any other operating system. The main difference with Linux is that it is far more deeply modular. Except for the kernel itself, which is one monolithic block (but modules can be added to it), everything else that makes up the system is a large collection of small tools. In fact, the proper name for Linux is GNU/Linux, because the GNU tools (e.g., bash, tar, gzip, gcc, cp, less, cat, dd, mount, ping, su, etc...) are what really make the operating system useful; the kernel by itself wouldn't be of much use. If you take any Linux distribution and look into the /bin folder, you find there all the core GNU tools without which the system wouldn't really work. It would be a really bad idea to remove any of these. The stuff you install later usually ends up in the /usr/bin folder (or /usr/local/bin), which is more for accessory stuff.

In terms of terminology, we generally talk about "libraries", "tools", and "applications". Libraries are collections of executable functions that can be called from other programs or libraries, i.e., they are what programs use to do things. Tools are generally very small command-line programs that do one specific task (like diff which just tells you what is different between two files or folders) and they can be used directly in the terminal, or used in the back-end of other programs (tools or …

RikTelner commented: Uhm, he just explained everything I could ask for. Congratulations. +2
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Oh, and another concern I would have about this system is the pressure that it puts on the answerers to answer. I get that this is the intent, but is it really desirable? What I mean is that currently, I usually just browse the questions and answer those that are interesting, quick to answer, have not been answered yet (or answered wrong), or all or any of the above. I'm not sure that the influence of money is all that great in that mix of reasons to answer a question. For example, currently, when there is a "OP Sponsor" tag, I'm more inclined to click on the thread, but I don't necessarily feel pressured to answer any more than with a normal thread. Putting money on targeted bounties puts additional pressure (e.g., I feel bad if I don't really want to answer that question, when I've been called out to answer it). It's similar to when people PM me (or chat) to ask me to check out a particular thread they posted, I sort of feel like I'm letting them down if I don't answer, but at the same time, I might be busy or otherwise not really interested in that question. I guess what I'm saying is be careful about a feature that might have the side-effect of giving your members a guilty conscience.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

The dd command is essentially just a raw byte copying program. If the "in file" is a disk and the "out file" is a file, it just writes the entire content of the disk (byte-per-byte) into the file. If the "in file" is a file and the "out file" is a disk, it just writes the entire content of the file (byte-per-byte) into the disk. It's that simple.

WARNING: Be careful, dd is a very close-to-the-metal utility; there are no safeguards in that application. It does a pure, raw, byte-per-byte overwriting of the destination media (disk or file). If you have doubts about what you are doing with it, you probably shouldn't be doing anything with it.

rubberman commented: Succinctly put Mike. :-) +12
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

First of all, the "systems programming" term is a bit general and refers to many areas, which usually have the following characteristics: (1) high performance requirements, (2) low overhead (memory, latency, etc.), and (3) a complex software infrastructure to build. Examples of this might include OS kernels (or device drivers), compilers, database servers, virtual machines, computer games, etc.

Obviously, stuff that is more low-level (kernels, drivers, embedded software) also requires more low-level facilities, like being able to manipulate alignment, change individual bits, arrange precise memory layouts, and perform arbitrary aliasing (pointers, and unchecked type-casting). This, alone, rules out most of the "safe" languages, whether you can compile them to native code or not. For example, in Java, even compiled to native code, you cannot do any of these things, at all, so, it's ruled out.

Most "high-level" languages are essentially categorized as "high-level" specifically because they don't allow you to do any of these "low-level" things. So, that's a pretty important dividing line. And that's why, for low-level applications, the list of reasonable candidate languages is pretty short, mostly has Assembly, C, C++, D, Ada, and maybe a few others. Traditionally, people also classify languages as "low-level" in reference to their lower level of abstraction (conceptually moving away from the machine), but I find that classification rather pointless because it doesn't convey the real reason why low-level languages are used for low-level tasks, not because they lack abstraction, but because they grant low-level access (e.g., C++ does not lack abstractions, but …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Here is the documentation for the random function. It appears that random(N) generates a number between 0 and N-1, very much like the more standard rand() % N would do. The randomize() function is the equivalent of srand((unsigned int)time(NULL)), which is used to seed the random number generator.

So, to the OP's question, it appears that your friend's logic is correct, i.e., the code random(2) + 2 should output either 2 or 3.

It goes without saying that these functions are not recommended at all. First, they are not standard functions (look at rand() / srand() for standard C functions, and look at the <random> header for the standard C++ options). Second, these functions are only available in Turbo C, which is more than 20 years old, and pre-dates even the first official version of C++, and is barely more recent than the first ISO-standard version of C. And third, these random number generators are poorly implemented (mod-range is non-uniform, and seeding with time is not great).
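For reference, here is a minimal sketch of the modern C++11 way with the <random> header, producing 2 or 3 with uniform probability:

#include <iostream>
#include <random>

int main() {
  // Seed a Mersenne Twister engine from a non-deterministic source.
  std::random_device rd;
  std::mt19937 gen(rd());
  // Uniformly distributed integers in the closed range [2, 3].
  std::uniform_int_distribution<int> dist(2, 3);
  std::cout << dist(gen) << '\n';
}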

sepp2k commented: +1 +6
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Homeopathy is major BS. No doubt about it. Basically, if its principles were true, dropping one drop of beer in the ocean would cure all the alcoholics of the world. Yeah, right!

At best, it's nothing more than a placebo.

ddanbe commented: I could'nt agree more. +0
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Back in the day, I created some water effects (and other similar effects) using Perlin Noise. There are also alternatives to that classic method, but the general idea is the same. You can generate textures, height-maps (almost-flat mesh, with a regular x-y grid), and/or bump-maps (a texture that is used as "bumps" instead of colors).

The nice thing with these "noise" methods is that they are adjustable and additive. So, for example, you could have a height-map that contains only the most visible waves (low frequency, high amplitude), and on top of that, a texture and bump-map for smaller ripples.

To make things dynamic (moving with time), the simplest method is just to generate the textures / maps at regular intervals (e.g., once or twice per second), and interpolate (e.g., through a pixel / vertex shader) continuously between the last and the next model. That worked out very well for me back in the day (more than a decade ago), and as far as I know, this is still a popular method (it's just that now, you can do it with much more detail and depth than I was able to with the older hardware).
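As a minimal sketch of that interpolation idea (hypothetical names, done on the CPU here; in practice you would do this per-vertex or per-pixel in a shader):

#include <cstddef>
#include <vector>

// Blend two height-map keyframes; 't' in [0, 1] is the fraction of time
// elapsed between the last generated map and the next one.
std::vector<float> blend_heightmaps(const std::vector<float>& last,
                                    const std::vector<float>& next,
                                    float t) {
  std::vector<float> result(last.size());
  for (std::size_t i = 0; i < last.size(); ++i)
    result[i] = (1.0f - t) * last[i] + t * next[i];
  return result;
}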

Also, for water, you have to remember that specular reflection (mirror-like reflection) is very important to really achieve the perfect effect. And bump mapping also becomes even more important when specular reflection is applied.

If I were you, I would say, just go step …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

don't we run the risk that 2 different threads will first use it at the same time?

That is correct. Local static variables are thread-safe (since C++11). The standard requires that some form of synchronization is used to guarantee that the object is initialized exactly once. I covered this in a thread a little while back; see here.

What is the underlying logic behind that difference of treatment here between the inline and non-inline function?

A function that is defined in a particular translation unit will become available to be called (after loading the program) when all the static data (e.g., global variables or static data members) defined in the same translation unit are also loaded and initialized. For functions that have been inlined by the compiler, their definitions actually appear where they are called (that's what inlining means), which could be in a different translation unit, and therefore, it cannot be guaranteed that the static data from the other translation unit has already been initialized. That's the difference. This is just another case of the static initialization order fiasco.
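For illustration, here is a minimal sketch (my own, not from that thread) of the function-local static pattern under discussion; since C++11 the compiler must synchronize this initialization so that it happens exactly once, even with concurrent callers:

#include <string>

struct Config {
  std::string name;
};

// The local static is constructed the first time get_config() is called,
// and the standard guarantees that this construction happens exactly once.
Config& get_config() {
  static Config instance{"default"};
  return instance;
}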

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

why switch to windows 8?

Do these reasons count:

  • Mental illness
  • Masochism
  • Seeking relief from the fat wallet syndrome

I'm just kidding. ;)

I've never used Windows 8, so I can't really comment.

Because any mistake cannot be tolerated? Sometimes I think that the Microsoft hate is taken a little too far. Innovation involves risk of mistakes, and an iterative approach as real feedback comes in

I think that what makes Microsoft's fumbles with Windows more egregious is the fact that (1) you have to pay for that faulty product, (2) you cannot choose an alternative if you don't like their "creative direction", and (3) you have to wait 3 years for the next iteration.

Compare this, for example, to the uproar when Ubuntu introduced the Unity interface. It was a risk and many people didn't like it. For people who really didn't like it, there were plenty of alternatives (using an older version, installing the classic UI, installing a different distro, etc.). For most, they complained for a little while, but by the time the next release came (6 months later), most of the problems had been worked out, and most people were happy. And there is only so much you can complain about something you get for free.

With a faulty version of Windows (like 98, Me, Vista, and 8), the situation is completely opposite. You generally don't have alternatives when buying a new computer. You have …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

So the solution would probably be that when the compiler encounters a template cstr array argument it puts it in a special area of string storage in its internal representation.
At time of linkage with other compilation unit, all those special storage area are merged together so that any duplicated cstr gets a single final storage area/address in the final .exe.
By keeping a separate area for those strings, the compilers avoids mixing them with other strings which would not be affected by this.

Yes. That is the solution, in general. This is pretty much what the compiler does for integral template parameters (e.g., int). The point is that an integral type (like int) can be dealt with at compile-time, meaning that the compiler can compare integer values to determine that they are equal, and it can also create a hash or some other method to incorporate the integer value into the name-mangling of the instantiated template. So, when you have some_class<10> in one translation unit and some_class<10> in another, they will both resolve to the same instantiation and therefore can be merged or otherwise considered as the same type at link-time.
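For instance, here is a trivial sketch of the integral case that already works this way:

template <int N>
struct some_class { };

some_class<10> a;   // in one translation unit
some_class<10> b;   // in another translation unit: resolves to the same instantiation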

If string literals could be treated by the compiler the same way as integral constants, then the compiler could do the same. However, types like char* or char[] (which are identical, by the way, if there is immediate initialization) have the issue that they could also just point to a string with external linkage, …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

The implementation occurs at the preprocessing step.

It doesn't matter when it occurs. Moreover, it cannot occur at the preprocessing step, because it requires semantic analysis, which occurs at the compilation step.

Anyway, what you demonstrated there is what compilers already do when instantiating templates. The addition of the static member is not really important; compilers have other mechanisms for that purpose. This is not the core of the issue at all, and the problems I mentioned still apply, especially the ODR violation!

Here is a more explicit illustration of the problem:

In demo.h:

#ifndef DEMO_H
#define DEMO_H

template <char* Str>
struct demo { /* some code */ };

void do_something(const demo<"hello">&);

#endif

In demo.cpp:

#include "demo.h"

void do_something(const demo<"hello">& p) {
  /* some code */
};

In main.cpp:

#include "demo.h"

int main() {
  demo<"hello"> d;
  do_something(d);
};

Now, the sticky question is: Is the "hello" string in demo.cpp the same as the "hello" string in main.cpp? If not, then the type demo<"hello"> as seen in main.cpp is not the same as the type demo<"hello"> seen in demo.cpp.

Consider this (stupid) piece of code:

In half_string.h:

#ifndef HALF_STRING_H
#define HALF_STRING_H

#include <cstring>   // for std::strlen
#include <string>    // for std::string

template <char* Str>
struct half_string { 
  char* midpoint;
  half_string() : midpoint(Str + std::strlen(Str) / 2 + 1) { };

  static const char * p_str;
};

template <char* Str>
const char * half_string<Str>::p_str = Str;

std::string get_first_half(const half_string<"hello">&);

#endif

In half_string.cpp:

#include "half_string.h"

std::string get_first_half(const half_string<"hello">& p) {
  return std::string(half_string<"hello">::p_str, …
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Well, option 2 is certainly not possible. The problem is not with the instantiation of the class but with the identity of the instantiation of the class (or specialization). You cannot insert, within the class, the entity that defines its identity; it's a circular logic problem, similar to trying to create an object of an incomplete type.

Option 1 could technically be done, but there are still a number of problems with this. For one, allowing the hidden creation of an externally visible symbol is not something that would sit well with some people (not me personally, but some people wouldn't be happy about that).

Another important issue would be about this situation:

demo<"hello"> a;

demo<"hello"> b;

Are a and b of the same type? No. That's a surprising behavior that most novices wouldn't expect, and the compiler cannot, in general, be required to diagnose this kind of problem (even though it could emit a warning). In other words, this could easily be a source of a silent bug. Now, the programmer could fix this as follows:

typedef demo<"hello"> demo_hello;

demo_hello a;
demo_hello b;

but is that really better than this:

char hello[] = "hello";

demo<hello> a;
demo<hello> b;

And also, with the typedef solution, you still have a problem when the type demo_hello is used in different translation units, because, again, the types will be different. And that leads to a violation of ODR (One Definition Rule), which the C++ …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

This problem is actually very straightforward. It is allowed to have pointers as template parameters. However, you can only instantiate the template with a pointer to something that has external linkage. So, your example (with text) works only because text is a global variable with external linkage. If you change its linkage to internal, it doesn't work:

template <char* P> struct demo { };

static char text[] = "hello";

demo<text> dt;   // error: 'text' has internal linkage

The problem with this is that things (variables, functions, etc.) that have external linkage have a program-wide address, i.e., an address that is resolvable at link time because it will end up at some fixed address in the data section of the final program. This allows the compiler to form a consistent instantiation of the template.

When things have internal linkage or no linkage, there is no fixed address; in fact, there might not be an address at all (it could be optimized away, or created on the stack). Therefore, there is no way to instantiate a template based on that non-existent address. When you have a literal string, like just "hello", it's a temporary literal with no linkage. This is why this thing cannot work, and will never work.
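For contrast, here is a minimal sketch of the form that does work (a namespace-scope array with external linkage):

template <char* P>
struct demo { };

char text[] = "hello";   // external linkage: fixed, link-time-resolvable address

demo<text> dt;           // OK: the template argument is a well-defined address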

You have to understand that C++ is really close to the metal, and most of the limitations that may seem excessive are actually there because this is where the real world implementation issues collide with theoretically …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

the very very safe side, in section 25.2.2 the use of string litteral as template argument (apparently correctly declared erroneous on page 724) but used later happily in 2 examples on page 725
Vec<string,""> c2
and later
Vec<string,"fortytwo"> c2;
that surely must have been written by the same student who wrote the ADL example, right?

Wow... you are pointing out some real flaws in that book. I'm starting to have doubts about the care that Stroustrup put into that book (which I have not read, beyond what you have pointed out so far), because these are some serious things that I'm pretty sure any competent reviewer should have picked up on. In that example, it is not quite using a string literal as a template argument, but it's still wrong; in fact, it's doubly wrong. The idea here is that the string literal is (presumably) converted to a constexpr std::string object and that that object is used as the template argument. That's wrong because (1) you cannot use arbitrary literal types as template value parameters (only integral types), and (2) a constexpr string is not even a literal type because it is a non-trivial class. There was a proposal for C++14 to allow arbitrary literal types as template value parameters, but even though Stroustrup states in that section of the book that this restriction is there "for no fundamental reason", I think that this proposal is dead in the water, AFAIK, because there are indeed fundamental …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

1) Section 28.2.1.1 p784

You are indeed missing the main point here. The reason why you want to avoid the template instantiation is because of the failure case, not the successful case (as you tested). The thing is, if you want to instantiate a template like Conditional<A,B,C>, the types A, B, and C must be instantiated before you instantiate the Conditional template. By analogy to normal functions (instead of meta-functions, what Stroustrup calls "type functions"), before calling a normal function, each argument must be evaluated. Similarly, before instantiating Conditional, all arguments (A,B,C) must be instantiated.

The case that Stroustrup is describing here is when one of the arguments (say, B) cannot be evaluated when the condition (A) is false. This isn't really a matter of template aliases versus the "old-style" typename ..::type technique. For example, if you had the following:

typename std::conditional<
  std::is_integral<T>::value,
  typename std::make_unsigned<T>::type,
  T
>::type

There is a problem because when the typename std::make_unsigned<T>::type argument cannot be instantiated (because T has no unsigned variant), the whole thing cannot be instantiated. In reality, in all the cases where make-unsigned would fail, we also know that is-integral would evaluate to false, and therefore the make-unsigned argument is never really needed in that case. In other words, the make-unsigned argument is prematurely instantiated, and this can cause obvious problems. In the example here, when is-integral is false, the conditional is supposed to return T, but instead, it will fail with a compile-time error.
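To make that concrete, here is a minimal sketch of one common way to defer the instantiation (the names identity and unsigned_if_integral are mine, not from the book): the conditional selects between the two meta-functions themselves, and only the selected one has its nested ::type accessed.

#include <type_traits>

template <typename T>
struct identity { using type = T; };

// Pick the meta-function first, then evaluate only the chosen one:
template <typename T>
using unsigned_if_integral = typename std::conditional<
    std::is_integral<T>::value,
    std::make_unsigned<T>,   // merely named here, not instantiated unless selected
    identity<T>
  >::type::type;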

This is also …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Just so that people understand what this discussion is about, I'm gonna post the essential bits of code from that section of the book.

The first solution that he puts forth (that you are critiquing) is this:

#include <utility>  // for std::move (used in the 'node' class below)

template <typename Node>
struct node_base {
  Node* left;
  Node* right;

  node_base() : left(nullptr), right(nullptr) { };

  void add_left(Node* ptr) {
    if( !left )
      left = ptr;
    else
      { /* .. do something else (?) */ }
  };

  //... other tree-manip functions..

};

template <typename ValueType>
struct node : node_base< node<ValueType> > {
  ValueType value;

  node(ValueType aVal) : value(std::move(aVal)) { };

  //... other value-access functions..
};

Now, at face value, this seems, as you said, sort of pointless, but I would disagree, and even more so considering that this is a bit of a set-up for what comes later in the same section (the next couple of pages), where the motivation for this becomes even clearer.

But you can already see a hint of what the purpose of this "very complicated structure" is. And by the way, if you think this is a complicated structure... man, wait until you get a load of some serious data structure implementations; this thing is a piece of cake in comparison. So, the thing to observe here is that in the base class I wrote "other tree-manip functions" and in the top-level class I wrote "other value-access functions", and that's already one reason (and not so obscure either) for splitting things up like that, because, if nothing else, it …

StuXYZ commented: Very clear +9
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

lot of people usually don't use const with pointers even when they don't intend to change what the pointer points to, and I just want to know why that is.

This is mainly a reflection of the ineptitude or laziness of many programmers in C++. First of all, const-correctness is something that is unique to C++ (and D), and doesn't really exist in its complete form in any other language (it exists in C (since C90) but it's limited, and there are some minor features in Java and C# that attempt to mimic it, but they are far too limited to be used in the same way). Even in C, using const is not too popular, for historical reasons (lots of C programmers and code date back to before 1990). In other words, anyone coming to C++ from almost any other language is probably not familiar with or used to writing const-correct code.

Another reason is that some people lack the technical know-how to write const-correct code. I mean that once in a while, you have to bend the rules a little bit, either because you are interfacing with an older library, or because you need to modify some data member within a const member function. C++ gives you some tools to bend the const-ness rules, in particular const_cast and the mutable keyword (which applies to data members that are "exempt" from the const-ness of their containing object). If you don't know how to …
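As a small illustration of the mutable escape hatch mentioned above (my own sketch, not code from this thread):

#include <string>

class Record {
  std::string data;
  mutable bool cached = false;        // exempt from the object's const-ness
  mutable std::string cache;
public:
  // A const member function may still update the mutable members,
  // e.g., to cache a computed result:
  const std::string& formatted() const {
    if (!cached) {
      cache = "[" + data + "]";
      cached = true;
    }
    return cache;
  }
};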

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I agree with deceptikon. Learning the way you describe is nice from the perspective of acquiring a deep understanding of everything. I did something similar with mathematics, from basic logic axioms up to calculus of variations, and it was very enlightening and amazing. However, doing this requires, as a prerequisite, a strong and unwavering interest in the subject matter. I already did a lot of math and loved it before I went on this bottom-up journey. You have to build the passion and the curiosity before you go on that road.

Remember, the challenge of teaching is not about what is the most logical way to present the material; it's about sustaining interest or fostering a passion for the subject. If you had a matrix-style brain-plug, I agree that your description would be the correct order in which to download the knowledge (bottom-up), but until then, we are stuck with those real challenges of pedagogy.

Personally, I sort of learned from the top to the bottom. I started with a language heavily inspired by Visual Basic (a kind of open-source clone of VB). I was instantly hooked on this amazing power to create applications that looked just like the "professional" software; I was just amazed that I could do that with just a bit of brain-to-finger gymnastics. I got tired of the limitations of this VB-like language (and library), so I moved on to something more "mid-level", which was Delphi (an object-oriented language somewhere in between C++ and C#, but derived …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

This actually has nothing to do with the move constructor.

The reason why the move constructor is not called is because of return-value optimization (RVO). This is one of the rare cases where the compiler is allowed to change the behavior for optimization purposes. The point here is that RVO allows the compiler (under some circumstances) to construct the return value of a function directly within the memory of the variable in which the result is to be stored at the call site. So, the return value of the function createA() (called "a" internally) will be created directly in the same memory that the parameter "a" to the function print() will occupy. This allows the compiler to avoid an unnecessary copy (or move). This is not always possible, but it often is, and it certainly is happening in the trivial case in your example. So, that explains it.
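Here is a minimal sketch of what is being described (the names A, createA and print mirror the ones referenced above; the exact output depends on the compiler and on whether copy elision is applied):

#include <iostream>

struct A {
  A() { std::cout << "default ctor\n"; }
  A(A&&) { std::cout << "move ctor\n"; }
};

A createA() {
  A a;             // a single named return value: a prime candidate for NRVO
  return a;
}

void print(A a) { /* ... */ }

int main() {
  // With RVO / copy elision, "a" can be constructed directly in the memory of
  // print's parameter; without it, you would see move-constructor calls here.
  print(createA());
}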

Now, like I mentioned, there are cases where RVO cannot be applied. And so, we can use that knowledge to deliberately disable RVO by writing "stupid" code within the createA() function. Now, this could depend on how smart your compiler is and the level of optimizations enabled. Basically, the main requirement for RVO to be applied is that there should be a single return value in a function; this means either you have a single named variable for the return value and all the return statements return that variable (this case is called "Named RVO" or NRVO), or you have a single return statement. …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

@Excizted: I have to warn you that your post is close to violating the "Keep it Pleasant" rule of this forum. We are a welcoming community and we strive for enriching discussions. Your post used bad language and insulting words. As for your remark of "and so what? Do you want us to care?", I must remind you that caring about people's problems is a prerequisite for helping them, and therefore, stokie-rich is certainly justified in expecting us to care about his problem, because we do; otherwise we wouldn't come here to help people.

@stokie-rich: I can't help you with PHP/SQL questions because I don't know anything about that; I mainly posted here as a moderator. However, I would suggest you post what you have tried and describe the specific problems you are having with it. You have to help us help you, by being clear and specific, and by showing your efforts to solve the problem.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I would assume that POCO_LONG_IS_64_BIT detects whether the type long is 64 bits. Usually, on 64-bit platforms, the int type is still 4 bytes (32 bits); it's only that pointers are 64 bits (to allow addressing more than 4GB of RAM) and that there are 64-bit registers available natively (which can accommodate 64-bit integers without having to break them up).

So, the logic error here is that you assumed that a 64-bit platform necessarily means 64-bit integers. Most compilers will not give you 64-bit integers unless you ask for long long int. Also, a 64-bit platform does not necessarily imply that memory is aligned on 8-byte boundaries; in fact, I don't think it ever is for plain integers, and therefore 4 bytes is really the ideal size for an integer that doesn't need to be that big, i.e., for int or long.
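To see this for yourself, a quick check like the following (a minimal sketch) typically prints 4 for int on both 32-bit and 64-bit platforms, while the pointer size is what actually changes:

#include <iostream>

int main() {
  std::cout << "int:       " << sizeof(int)       << " bytes\n";  // usually 4, even on 64-bit platforms
  std::cout << "long:      " << sizeof(long)      << " bytes\n";  // 8 on 64-bit Linux/Mac (LP64), 4 on 64-bit Windows (LLP64)
  std::cout << "long long: " << sizeof(long long) << " bytes\n";  // at least 8
  std::cout << "void*:     " << sizeof(void*)     << " bytes\n";  // 8 on a 64-bit platform
}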

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Do I need a header file and a .cpp file both to use a custom defined class?

In addition to AD's remarks, I just want to clarify what you mean by "use". To create a custom class, it is conventional to place the declarations in a header file and the definitions in a cpp file ("definition" means "implementation"). But as AD remarked, you can, in some cases, depart from that convention, notably for short functions (inline) and templates, in which case you put the definitions directly in the header file. To use a custom class in some other code, you need to include the header file in order to compile that code, and then you need to link with the compiled cpp file of the custom class. The cpp file for the custom class can either be compiled along with your other cpp file(s), compiled separately into an object file (.o or .obj) and then linked with your other cpp file(s), or bundled into a static or dynamic library and then linked with your other cpp file(s).
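For concreteness, here is a minimal sketch of that convention (the class Point and the file names are hypothetical):

// point.h -- declarations of the class
#ifndef POINT_H
#define POINT_H

class Point {
  public:
    Point(double x, double y);
    double length() const;              // defined in point.cpp
    double getX() const { return x; }   // short (inline) function, defined directly in the header
  private:
    double x, y;
};

#endif

// point.cpp -- definitions (implementation) of the class
#include "point.h"
#include <cmath>

Point::Point(double aX, double aY) : x(aX), y(aY) { }

double Point::length() const {
  return std::sqrt(x * x + y * y);
}

// main.cpp -- some other code that "uses" the class
#include "point.h"     // needed to compile this code
#include <iostream>

int main() {
  Point p(3.0, 4.0);
  std::cout << p.length() << std::endl;   // prints 5
}

You would then compile and link the two cpp files together, e.g., with g++ main.cpp point.cpp -o test_point, or compile point.cpp separately into an object file (or a library) and link it afterwards.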

I highly suggest that you read my comprehensive tutorial on compiling C++ code.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

what do you prefer for naming: database - c++ - java - php?

Like they say: "When in Rome, do as the Romans do."
To me, it's all about whatever is conventionally used in the project or language in question. None is intrinsically better than another; the only thing that makes one notation superior to another is how accustomed the most-likely readers are to it, and that's why they're called "conventions". If I program in my own C++ library or in Boost, I follow the STL-style convention (lower-case, underscore-separated names for most things, except template arguments (and concepts) in CamelCase), because that is the established custom there (and in most "expert" C++ libraries). If I code in Clang (also C++), which makes heavy use of OOP and is strongly influenced by Java and Objective-C, then I write in that Java-style CamelCaseObject.doSomething(); style.

I tend to prefer the C++ STL style because that's what I'm most used to, that's all. I tend to dislike the Java-style CamelCase notation, but that is mostly because I do a lot of C++, and Java-style notation is usually an indicator of Java-style programming, which, in C++, is a guarantee of very poor code quality. So, that's why it's off-putting to me, because I get a bad feeling about what I'm getting into.
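Just to illustrate the two customs (these names are purely hypothetical):

#include <cstddef>
#include <string>

// STL / Boost style: lower-case names separated by underscores,
// with CamelCase reserved for template parameters and concepts.
template <typename InputIter>
std::size_t count_non_empty(InputIter first, InputIter last) {
  std::size_t n = 0;
  for (; first != last; ++first)
    if (!first->empty())
      ++n;
  return n;
}

// Java-influenced style, as you would see in Clang-like code bases:
class StringTokenizer {
  public:
    bool hasMoreTokens() const { return pos < text.size(); }
  private:
    std::string text;
    std::size_t pos = 0;
};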

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

A character array is just what it sounds like: a raw array (a sequence of contiguous memory) of characters (1-byte numbers that are interpreted as characters when printed on a screen) with the end marked by a 0 character (null-terminated strings). That's about all you get with a character array: it's just raw memory, you have to manually allocate / deallocate it, use C-style functions like strcmp or strcat to operate on that memory, and you have to worry about capacity yourself. Character arrays are essentially the only option in C for representing strings. It can be useful to know about them and how to work with them, mainly as a way to build your "low-level understanding" of things, but they are not really used in practice in C++, unless you are dealing with some C code.

Strings, as in, the std::string class, are the proper way to deal with strings in C++. This is a class that encapsulates a dynamic array of characters and provides a number of useful features for manipulating strings. For one, it automatically handles the allocation / deallocation of the memory, as well as increasing the capacity whenever needed (as you append stuff to the string). That, by itself, is a sufficient reason to use C++ strings over C-style character arrays almost always. In addition, C++ strings have a number of nice functions and operators that allow you to compare, concatenate, split, and transform strings very easily and safely. They …
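Here is a quick side-by-side sketch of what that difference looks like in practice (a hypothetical example):

#include <cstring>    // C-style string functions
#include <string>
#include <iostream>

int main() {
  // C-style: fixed capacity, manual bookkeeping, easy to get wrong.
  char c_greeting[32];
  std::strcpy(c_greeting, "Hello, ");
  std::strcat(c_greeting, "world!");     // would overflow if the total length reached 32
  if (std::strcmp(c_greeting, "Hello, world!") == 0)
    std::cout << c_greeting << '\n';

  // C++ std::string: memory and capacity are handled for you.
  std::string greeting = "Hello, ";
  greeting += "world!";
  if (greeting == "Hello, world!")
    std::cout << greeting << '\n';
}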

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

It wasn't me who down-voted that post. But I can understand why: (1) just one line of code with no explanation is not very helpful, (2) the code is indeed wrong (undefined behavior), and (3) the corrected version of that code (Kristian_2's version) is still problematic due to overflow problems.

The reason why b = a + b - (a = b); is wrong is the unspecified order of evaluation of the sub-expressions. The first appearance of "a" (left-to-right) in that statement could just as well evaluate to the original value of "a" as to the new value of "a" (which is "b"). The code only works if you assume that the first appearance of "a" evaluates to the original value of "a". But there is no guarantee that this will be the case, i.e., it is undefined behavior (UB). It's basically the same problem as with (++i + i++), which could evaluate to just about anything.

Also, even if it weren't undefined behavior, it would still be a swap method that requires additional storage, because a temporary variable must be created (by the compiler) to store the original value of "a" while it is being assigned a new value, and so the equivalent code would be int t = a; a = b; b = t;, which is the traditional swapping method using a temporary variable.

And the reason why Kristian_2's version is also problematic is because it cannot work for all numbers, due to overflow. If both …
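If the goal is simply to swap two integers safely, this is all it takes (a minimal sketch):

#include <utility>    // std::swap
#include <iostream>

int main() {
  int a = 3, b = 7;

  // The traditional temporary-variable swap (what the compiler has to do anyway):
  int t = a; a = b; b = t;

  // Or just use the standard library:
  std::swap(a, b);

  std::cout << a << " " << b << std::endl;  // back to "3 7" after the two swaps
}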

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Are you talking about the xor-swap trick? Like this:

void swap(int& x, int& y) {
  x ^= y;
  y ^= x;
  x ^= y;
}
rubberman commented: Elegant. The question is whether 3 xors is more efficient than one stack operation and 3 assignments... Probably! :-) +0
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

You neatly list the cases where it is not appropriate in a typical use of a list, but certainly not in the type of list I had alluded to, namely the case where you have thousands of lists, most of which are empty. In what circumstance would you typically encounter this? In a hash table, where the list is used to deal with collisions.

I guess you have something specific in mind, and if you can justify your choices, then great! No harm done in raising questions.

Generally, I despise linked lists. Every time I've included them among the possible options for some performance-sensitive code, the comparative benchmarks revealed that they were by far the worst choice. At this point, I've pretty much given up on ever using them.

I use linked-lists sometimes in non-performance-critical code when I want a self-managing list (i.e., just the chain, not the top-level class), which is also why it's used for hash table buckets.

Anyhow, I'm just not very impartial when judging linked-lists.

So, if your application is the buckets of a hash table, then why would you not just use the chain of nodes, without the "list" class, as is normally done?
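To make that concrete, here is a minimal sketch of what I mean by just using the chain of nodes as a bucket, without a top-level list class (the names Node and the bucket_* functions are hypothetical):

#include <string>

// A bucket is just a pointer to the first node of a chain; there is no
// separate "list" object carrying a size or head/tail bookkeeping.
struct Node {
  std::string key;
  int value;
  Node* next;
};

// Insert at the front of the chain (the hash table owns the array of bucket pointers).
void bucket_insert(Node*& bucket_head, const std::string& key, int value) {
  bucket_head = new Node{key, value, bucket_head};
}

// Walk the chain linearly to resolve a collision.
Node* bucket_find(Node* bucket_head, const std::string& key) {
  for (Node* n = bucket_head; n != nullptr; n = n->next)
    if (n->key == key)
      return n;
  return nullptr;
}

// Free the whole chain when the table is destroyed or rehashed.
void bucket_clear(Node*& bucket_head) {
  while (bucket_head != nullptr) {
    Node* next = bucket_head->next;
    delete bucket_head;
    bucket_head = next;
  }
}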

I thought you might have another trick that would allow me to have a policy with / without size, without using multiple inheritance (because of VS2013's lack of EBCO) and without using my recursive inheritance, which up to now still remains the only way to do this when there are …