mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Lol! So, does that mean that people who suffer from this condition can never seek treatment from fear of being confronted with the word that describes their condition?

Or maybe the word itself is the treatment. Like writing 100 times the sentence:

"I will not let hippopotomonstrosesquipedaliophobia get the best of me!"

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

There must be a purpose to what you want to do; otherwise, there are infinitely many ways to partition countries or the Earth. There are natural partitions (tectonic plates, geology, climate, basins (watersheds), etc.). There are geo-political partitions (country, province / state, county / parish / district, municipalities, etc.). There are demographic partitions (city centers, boroughs, etc.). There are economic partitions (rich vs poor, resources and main regional industries (e.g., "Silicon Valley"), etc.). There are cultural partitions (the "Western world", the "Islamic world", etc.), language partitions, and so on. Many of these boundaries coincide, but many also differ significantly (e.g., under some partitions, England and Australia would be very close; under others, they would be worlds apart).

The point is, each partition system serves a particular purpose, and without a purpose, your question is ill-defined. DavidB just gave you one example of a purposeful partition system, which is the postal / zip code system, whose purpose is to dispatch mail effectively (it's essentially a hash-table system, in computer science terms).

If your purpose is knowing which body of government regulates various things where you are, the useful partition is a political / administrative one, like country -> province -> county -> municipality -> borough, so you know where to address your concerns, from when the trash is going to be picked up (borough) to trying to get an amendment passed into the constitution (country).

So, what's your purpose?

Canadian system, which uses postal codes like M5B …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Use VLC. Windows media player is a joke.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

You just need to move line 7 to between line 11 and 12. That will fix it.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

The problem is that your operator== function is defined in the header, which means that it will be compiled for each cpp file that includes it, leading to multiple definition errors (see One Definition Rule). What you need to do is declare the function in the header and implement it in the cpp file:

In Vector.h:

//..

bool operator==(const vector& v1, const vector& v2);

//..

In the Vector.cpp:

// ...

bool operator==(const vector& v1, const vector& v2){
    if ((v1.GetX() == v2.GetX()) && 
        (v1.GetY() == v2.GetY()) &&
        (v1.GetZ() == v2.GetZ()))
        return true;
    else
        return false;
}

// ...

Also, notice that I used pass-by-reference instead of pass-by-value for the function parameters.

Also, you are playing with fire with your code. You should never put using namespace std; in a header file, and you should not create a class called vector in the global namespace either. This will cause a conflict with the standard vector class template.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I agree. The reason documentation is often very poor is due to the boredom factor. Writing documentation like overviews, examples and tutorials is just extremely boring, and nobody wants to do it. At least, that's the only reason why my library is not very well documented.

At least, with the fairly well-established practice of interface commenting and the automatic generation of reference documentation (e.g., doxygen), there is usually a decent start, which is better than nothing, but not that easy to navigate either.

A lot of what actually makes a code-base easy to understand is when it relies on good and well-established coding practices. When there are no awkward design choices or hidden surprises in the behavior, it's a lot easier to deal with. These practices include things like flat class hierarchies, value-semantics, hidden polymorphism (i.e., non-virtual interfaces), no side-effects within functions, const-correctness, just to name a few.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

This is indeed something of a problem. It is always difficult to jump into a new and large code-base. You should concentrate on any available overview documentation (e.g., explaining how it works or is structured), tutorials (e.g., showing how to make mods), and reference documentation (e.g., class references).

If you need to really dig into the source code of the library / application, then you need to find a relevant starting point. This could be something that is similar to what you want to do or modify. When you find a decent starting point, just look it up and try to understand what it does and how it works, and work your way out from there, by looking up related code (e.g., that is called within that code you are looking at, etc.). The "find in files" is definitely a central tool when exploring a code-base because it gives you an instant snapshot of where certain things are used and declared. For example, if you want to modify the behavior of class A, then there is a good chance that you will want to look at all the places where class A is used so that you can assess if your modified version will work or if you need to modify other things.

There is no doubt that doing this is hard and takes a lot of patience. I've done this several times with different code bases (some small ( < 50,000 LOCs ), some big ( > 1,000,000 LOCs …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

The problem is that you are looking in the wrong places. The reason why you cannot execute files on a USB drive is that the automatic mounting of USB drives in Ubuntu is set to read-write permissions, but no execute permission. Basically, the mount permissions are a mask that applies recursively to everything under the mount folder. So, you need to mount the USB drive with execute permission (rwx) if you want executable files on it to be executable while the drive is mounted.

This is a classic problem that has been around for years, and it baffles me why the Ubuntu team (or anyone related to the automount feature) has not yet gotten around to adding a GUI setting somewhere for the user to change the default mounting permissions for external / USB drives. And I think rubberman is right that the automount uses the antiquated fuse-ntfs driver, which mounts ntfs without execute permission by default (or maybe not at all).

Here are some solutions:

1) Edit your fstab file for that USB drive entry. I use this for ntfs drives:

UUID=<UUID of USB drive>  /media/<mount-folder>  ntfs-3g  permissions,uid=1000,gid=1000,dmask=022,fmask=022  0  0

and you can get the UUID of your USB drive with the command $ sudo blkid.

2) Mount the USB drive manually with the ntfs-3g option. You can just do this:

$ sudo mkdir /media/<mount-folder>
$ sudo mount -t ntfs-3g /dev/sdb1 /media/<mount-folder>

where /dev/sdb1 should be replaced by whatever your device name is, you can …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

The old town of Quebec city:

[image attachment]

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Because "Imagine there are no countries" has too many feet. That song is timed at 5-7 feet per line (I'm not 100% sure, because counting feet in English is a bit fuzzy), and using "are" in there results in 8 feet ("I-ma-gine there are no coun-tries").

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Some refreshing Club-Mate

Is that only to give yourself some gravitas as a programmer? Club-mate is the iconic hacker's drink.

are you familiar with mate-tea?

Yes, I used to drink "yerba mate" tea almost everyday... but then I ran out and didn't buy new stock. Thanks for the reminder.

What are you eating/drinking right now?

Currently, just savouring a glass of Lagavulin (16yo single malt), the best scotch ever. And I got a spinach pizza in the oven.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Yes, like the link I already gave tells you:

"The standard way is to hold down (or repeatedly tap) the Shift key while you boot. Grub should present you with a menu. Choose the second option, to go into recovery mode; then choose, Drop to root shell prompt."

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Those two objectives do make sense. But they also make a nice contradiction! Keeping the money question on the down low doesn't really play well with the idea of giving it as an incentive. This is getting a bit confusing.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

The main thing that determines longevity is inertia. The only reason people still use Fortran or Cobol is because of inertia. To some extent, C as well, although it is often a practical (minimal) language for small tasks.

Therefore, the languages that are likely to live for at least a couple of decades beyond today are those languages with the most weight today, which is essentially C, C++, and Java, plus PHP and Python (in their application domains); anything else is somewhat precarious. Also, languages that are tied to a specific company, instead of being governed by an open standard, will always depend on that company's continued market share and willingness to support the language; this includes, for example, Objective-C (Apple), Go (Google), and .NET (Microsoft). But, of course, you already know this, because you depended on VB (classic)! There is a reason (beyond the technical ones) why the classic paradigm for VB applications was to make the VB code just a thin front-end (GUI) that called the "real" back-end C++ code: VB could be killed any day (as it was), and that would only mean having to port / re-write some trivial front-end code, while protecting (in time) the really important, "hardcore" back-end code that would be very tricky to port.

Microsoft forced the move to NET by deprecating both VB Classic and VC++. I think little of NET as it bloated big time IMO.

It's not true that MS deprecated VC++. In fact, lately, there has been more …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Yeah, I think some of us got the wrong idea about what you are planning to do. I think we thought bounties were given as a sort of "come get a cash reward for replying to this thread!", but from our discussion, Dani, it is clear that you want the monetary aspect to be very low key, under the radar. I didn't mean to make the monetary side front and center, I just thought that this was what you had in mind already. But if the monetary side is very discreet, and there is no overt pressure of "hey, the OP paid money directed at you (among others), so you better reply!", then it's all fine with me, and I would retract any objections expressed earlier on this thread.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

System76 is another well-known company that sells computers with Linux pre-installed.

Burning the image to a DVD is not a particularly good option, in my opinion. Booting up from a DVD is very slow. You should put it on a USB stick instead. This is going to be faster when loading and running the live OS, and you get the benefit of being able to install things onto it, add things to it, and save your settings. For example, if you need to install a proprietary graphics driver to run properly on the system, you'll be able to do so if you run it from a USB stick. Or, if you anticipate having to install anything special, you can put the packages (deb files) onto the USB stick ahead of time.

As far as the fear that Ubuntu simply won't work on your laptop, I think that this is very unlikely. I've never seen Ubuntu completely not work on a computer. There can be some problems, especially with graphics drivers, but the worst you can get is that it is sluggish and slow, but it works enough to allow you to make the necessary corrections (configs, driver installations, etc.) to get it to work perfectly. We've discussed compatibility issues at great length already in other threads, so I won't delve into that further here.

What I mean is, the content was on the screen, but the flash, the light that is on screen, didn't "boot" up.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Yes, as AD says, you have to specify the output name with the -o option. Clang uses all the same options as GCC, as much as possible. So, if you need instructions on using clang and you can't find what you need, just look for instructions on using GCC; they usually apply to clang as well. Clang is meant to be a drop-in replacement for GCC. This is because most build systems (makefiles, cmake, etc.) are mainly geared towards dealing with GCC and ICC, which use the same main options and behaviors, and those are the two main compiler suites used for professional development, so it makes sense for clang to follow that de facto standard.

I stumbled upon the information that clang++ is a better alternative to g++ on Mac.

Yes, indeed. LLVM/Clang is the official C++ / C / Objective-C compiler for Mac. It's often difficult to get good up-to-date versions of GCC to work on Mac, due to Apple not allowing it in mainstream software repositories (you have to compile it from source or get it from third-party sources). Not to mention that clang is a better compiler than GCC, whichever the operating system you use.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I agree with rubberman that you don't really need to know assembly these days for most tasks. There are still some dark corners in which you will find assembly code, which mostly include very low-level pieces of things like operating systems or compilers, very critical functions that need nano-optimization (i.e., simple functions that are called so often that every clock cycle counts), and very tiny hardware platforms that don't support anything else (e.g., programmable integrated circuits (PICs), which typically just run a simple input-output mapping via a few lines of assembly code). The point is, today, if you are writing assembly, you are probably writing a very short piece of code for a very specific purpose. People simply don't write anything "big" in assembly, and frankly, I doubt anyone ever did, even back in the day (it's just that small systems were the norm back then).

However, it is very common in the programming field that people know how to read assembly, more or less. As a programmer, especially in some performance-critical areas, you occasionally encounter assembly code, but as a product of compilation. For example, you write a program and you want to know what assembly code the compiler produces when compiling it (which you can get with the -S compiler option). This can be useful to see how the code is optimized, if you need to change anything to get it to be more efficient once compiled, etc.. Then, there is also the occasional debugging tasks that involve …

rubberman commented: Good post. I have written boot loaders for x86 RT systems - not simple, but interesting. +12
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Hello and welcome. (Gruss Gott und willkommen!)
I hope to see you around in the C++ forum.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

What is intelligence anyway,.. Technically, many systems are intelligent to a degree, and often in very narrow bands.

That is clearly one of the core questions that people have been trying to answer, and there is definitely no clear line separating what we would recognize as real "intelligence" from what is simply a smart solution or sophisticated program. There seem to be a few critical components that are the distinguishing factors: high-level reasoning, learning, and situational awareness. I think that any system that lacks all of these cannot really be considered AI, and one that has all three is definitely very close to being really "intelligent".

In the department of high-level reasoning (a.k.a. "cognitive science"), the areas of research being pursued are things like probabilistic computing (and theory), Bayesian inference, Markov decision processes (and POMDPs), game theory, and related areas. The emerging consensus right now is that approximation is good and fuzziness is good. Reasoning means understanding what is going on and predicting what will happen (possibly based on one's own decisions), and doing that exactly is impossible (intractable); even we (humans) don't do it. This is why probabilistic approaches are much more powerful: you can quickly compute a most likely guess, plus some rough measure of how uncertain that guess is, and then base your decisions on that. You can see evidence of that with Watson, as he (it?) always answers with …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Yes, you can define the operator either as a member of the class or as a free function.

As a member function, you do it as follows:

//A class for a number of bank accounts.
class BankAccount
    {
    friend ostream& operator<<(ostream&, const BankAccount&);
    friend istream& operator>>(istream&, BankAccount&);
    private:
        int accountNum;
        double accountBal;
    public:
        BankAccount(int = 0,double = 0.0);
        void enterAccountData();
        void displayAccounts() const;
        void setAccts(int,double);
        int getAcctNum() const;
        double getAcctBal() const;

        // Add the less-than operator:
        bool operator<(const BankAccount& rhs) const;
    };

//..

bool BankAccount::operator<(const BankAccount& rhs) const {
  // insert code here, to compare 'this' with 'rhs'
  // for example, this:
  return this->accountNum < rhs.accountNum;
}

And as a normal (free) function, you would do:

//A class for a number of bank accounts.
class BankAccount
    {
    friend ostream& operator<<(ostream&, const BankAccount&);
    friend istream& operator>>(istream&, BankAccount&);
    private:
        int accountNum;
        double accountBal;
    public:
        BankAccount(int = 0,double = 0.0);
        void enterAccountData();
        void displayAccounts() const;
        void setAccts(int,double);
        int getAcctNum() const;
        double getAcctBal() const;
    };

//..

bool operator<(const BankAccount& lhs, const BankAccount& rhs) {
  // insert code here, to compare 'lhs' with 'rhs'
  // for example, this:
  return lhs.getAcctNum() < rhs.getAcctNum();
}

It is generally preferable to use free functions for operator overloading.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

The problem is that you are trying to sort BankAccount objects, but that class does not have a less-than comparison operator. The standard sort algorithm (unless otherwise specified) uses the less-than < operator to order the values. You need to overload that operator to be able to use the sort function. Here is a start:

bool operator<(const BankAccount& lhs, const BankAccount& rhs) {
  // insert code here that returns true if 'lhs' is less than 'rhs'.
}
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

The memory leak is the least of your worries. This function is riddled with undefined behavior, memory leaks, heap corruption, memory corruption, and general mayhem. I'm surprised that it makes it as far as completing one iteration.

1) When you allocate a single object with new, you need to delete it with delete, not with delete[]. This is because when you use delete[] (which is for deleting an array of objects), it will attempt to find the size (number of objects), which is not present when you allocated a single object; therefore, it is generally going to read some undefined value for the size and attempt to free that number of objects, which is going to cause a heap corruption, see here.

2) The use of the typedef in this local structure declaration:

typedef  struct tree
  {
         void *state;
          tree *action[4];
  };

is very antiquated C syntax. The correct way to do it in C++ (and in modern C) is as follows:

  struct tree
  {
      void *state;
      tree *action[4];
  };

3) You never initialize the state pointers within your tree nodes. When you create your new tree nodes, you (correctly) initialize all the action pointers to NULL, but you never initialize the state pointer to point to anywhere. This means that the state pointers point to some arbitrary (undefined) location in memory, and then, you perform memcpy operations on that. This is a memory corruption problem, see here. Consider …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Everything is fine except for the part where you test the palindrome. You need to break out of the loop as soon as you find a mismatching character. Currently, the way it is, you are going to loop until the last character, and the bool isPalindrome will only reflect the last character (whether it matches or not). You can use this loop instead of that palindrome-checking loop:

    while(k <= tempPLength && k <= tempRLength)
    {
       if(tempOrig[k] == tempRev[k])
          k++;
       else
          return false; // return false on the first mismatch.
    }
    return true;
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Check out compability between processor/RAM and motherboard.

This will have to do with the motherboard. You have to know the exact make and model of your motherboard and look up its technical sheet. Generally, it should specify which family of processors it is good for and what RAM technology it supports. The RAM is very likely not going to be a problem (at least, in the desktop world it never is, unless there is a very large age gap between the two). The processor-motherboard compatibility will be a bit more tricky. If the new processor is from the same manufacturer (AMD or Intel) and of the same generation, then there shouldn't be a problem; otherwise, you can check, but don't get your hopes up. Motherboards are generally designed for a particular family of processors.

Check out compability between processor/RAM and Linux

That's not going to be a problem. The processor and the RAM are two core parts of the computer and they are not governed by software or drivers. They are run via the hardware and firmware on the motherboard, so, that's where the compatibility is critical. If the computer can run at all, then it means the motherboard / RAM / processor are compatible, and from that point, any operating system will run just fine.

Is there like "transistor count number" or something, what do I have to look at?

If you look up any specific processor (exact make …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I understand your concerns Dani. Here is one suggested "workflow":

  1. The OP writes up his question / new thread.
  2. There is some box in the form to add a bounty to his thread, with some money amounts and corresponding number of members notified. He selects the option and an amount.
  3. A randomly generated (but well-distributed) list of members appears (in no particular order), where half (or so) are selected.
  4. The OP can unselect some members and select others instead if he wishes, as long as the number of members matches the amount at the end.
  5. The OP posts the question, and the targeted members are notified.
  6. As a targeted member clicks on the notification, he can read the thread (the usual page), but has the opportunity to opt out of responding to the bounty by forwarding it to another member (from the original list, or a new one, excluding members already targeted).
  7. Or, if someone has already given a good response to the question, in the opinion of the targeted member, he can forfeit his bounty in favor of that answer / member, effectively saying "he said what I would have said, give the bounty to him". That could also trigger some rep-points for that member, as it is, effectively, an up-vote.
  8. If the OP is satisfied by one or more members' responses, he awards the bounty to him/them. The OP can also see what members received forfeited bounties.

I think that (3) is good for the reasons I …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I think that the bounty should be available to all, i.e., any member who ends up giving the "accepted answer" to the question should be entitled to the bounty, regardless of whether or not they were originally called out by the OP. Otherwise, it would be unfair and could promote a certain "class division", so to speak (the same "elite" members are collecting all the bounties).

The second thing is, what about making the system anonymous? What I mean is, the OP puts down the bounty and, depending on the amount, it buys him a certain number of members to be notified, but then he doesn't get to choose the members from a list; rather, some algorithm would pick out the members (without telling the OP which members were picked). I think this could remove some of the pressure of being specifically called out to answer a question, and would allow you to have a more randomized algorithm that wouldn't always pick out the same top-ranking users. I imagine that many members that are somewhat new to Daniweb (and even veterans) don't really know, by name, that many other members who could answer their question, and would just end up picking all the top members from whatever forum they are posting in. For example, in the C++ forum, the highest-profile members (which are pretty much me, AD, and deceptikon) would be in the top three of everyone's bounty, all the time, which isn't very …

ddanbe commented: Could not agree more. +0
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

So Linux, is like kind of worktable and all the applications are tools?

Yes, like any other operating system. The main difference with Linux is that it is far more deeply modular. Except for the kernel itself, which is one monolithic block (though modules can be added to it), everything else that makes up the system is a large collection of small tools. In fact, the proper name for Linux is GNU/Linux, because the GNU tools (e.g., bash, tar, gzip, gcc, cp, less, cat, dd, mount, ping, su, etc.) are what really make the operating system useful; the kernel by itself wouldn't be of much use. If you take any Linux distribution and look into the /bin folder, you will find there all the core GNU tools without which the system wouldn't really work. It would be a really bad idea to remove any of these. The stuff you install later usually ends up in the /usr/bin folder (or /usr/local/bin), which is more for accessory stuff.
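You can see this for yourself from a terminal; for example:

```shell
# Show where a few of the core GNU tools live
# (paths vary by distribution, typically /bin or /usr/bin):
command -v bash cat cp ls
```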

In terms of terminology, we generally talk about "libraries", "tools", and "applications". Libraries are collections of executable functions that can be called from other programs or libraries, i.e., they are what programs use to do things. Tools are generally very small command-line programs that do one specific task (like diff which just tells you what is different between two files or folders) and they can be used directly in the terminal, or used in the back-end of other programs (tools or …

RikTelner commented: Uhm, he just explained everything I could ask for. Congratulations. +2
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Oh, and another concern I would have about this system is the pressure that it puts on the answerers to answer. I get that this is the intent, but is it really desirable? What I mean is that currently, I usually just browse the questions and answer those that are interesting, quick to answer, have not been answered yet (or answered wrong), or all or any of the above. I'm not sure that the influence of money is all that great in that mix of reasons to answer a question. For example, currently, when there is a "OP Sponsor" tag, I'm more inclined to click on the thread, but I don't necessarily feel pressured to answer any more than with a normal thread. Putting money on targeted bounties puts additional pressure (e.g., I feel bad if I don't really want to answer that question, when I've been called out to answer it). It's similar to when people PM me (or chat) to ask me to check out a particular thread they posted, I sort of feel like I'm letting them down if I don't answer, but at the same time, I might be busy or otherwise not really interested in that question. I guess what I'm saying is be careful about a feature that might have the side-effect of giving your members a guilty conscience.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I like the idea overall. Not uber-excited about it, but there is some potential there.

I'm not so sure about using the PM mechanism to notify about bounties. I think that if this feature takes off, some of us will have a flooded inbox. Maybe something more like the chat notifications would be more appropriate.

I also share other people's concerns about the mechanism for awarding the bounty / pot to whoever "solved" the issue. Given how many OPs leave threads "unsolved" (even when, in reality, they are pretty much solved), it casts some doubt on the ability to come up with a more reliable system when there is money involved (or more accurately, spare change). There needs to be a fool-proof system for that, e.g., some time limits, automated "solve" marking, or something.

Or is the intent that Daniweb keeps the money if the thread is never marked as solved? That sounds a bit weird, i.e., "if Daniweb cannot deliver, Dani keeps the money!".

the lure of money might be enough to get some super smart people back to contributing.

Does that include Narue? ;) I bet "she" would get plenty of bounties.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I think that that's the heart of the problem.

There is nothing wrong with a qualified doctor who diagnoses that particular symptoms are psychosomatic (the clinical term for the kind of thing that placebos can affect, i.e., when your mind (e.g., stress, anxiety, etc.) causes the perceived (or real) symptoms) and then decides to use a placebo to treat them. And for that, homeopathy could be a useful... deception.

The real problem is when the practitioner himself has fallen prey to the deception. In other words, the homeopathic "doctor" who thinks that his remedies are actually effective against actual illnesses (as opposed to psychosomatic ones; not that psychosomatic illnesses are any less real than others). This is tantamount to quackery and could have serious consequences.

But that's really the dilemma here. If you want a reliable placebo that everyone believes is real, so that you can use it to treat psychosomatic symptoms, then you have to maintain this deception, which will inevitably lead to some people establishing a quackery practice or industry around it.

I don't know if there is much established practice in medicine (I mean, real medicine, the scientific kind) around the prescription of placebos for psychosomatic symptoms. If there isn't, there should be, with a more reliable method of deception (e.g., "fake" prescription drugs that a patient can buy at the pharmacy without knowing it's fake). With all these people who are convinced that there must be a drug to fix …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

The problem is not the sugar pills but the doctors who lie about what the pills really are.

But without that lie the sugar pills would not do anything. The sugar pills are only an accessory or a token; the placebo effect is purely a result of the false belief instilled in the patient. If you give people sugar pills and tell them what they are, and that they shouldn't have any effect, then they won't have any effect.

And yes, in proper trial tests, the process is double-blind, meaning that neither the doctor giving the pills, nor the subject receiving it, know whether the pills are real or fake. Because if the doctor knew, he could be acting differently (wittingly or not) with the different subject groups, therefore invalidating the results.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I need to know write commands

I just told you what the write command is. dd is just a copy command: it can copy files, partitions, and entire disks. You just specify whatever you want as the source and whatever you want as the destination, and it will copy from one to the other. So, there is no "read" or "write" command, there is only a "copy" command. If you want to take a raw image file and copy it to a disk (HDD), then you just specify your file as the source (if) and the disk as the destination (of).

from dd format

This tells me that you have no idea what you are doing. There is no such thing as a "dd format". The dd tool does not write things in some sort of format; it only makes a raw byte-for-byte copy of the data. It ignores file-systems, it ignores formats, it ignores everything and just blindly copies the raw bytes from a source (if) to a destination (of). It is entirely your responsibility to make sure you are copying the right things into the right places, otherwise you will seriously mess up your system.

re-create a bootable disk

A bootable disk is nothing more than a disk that has a bootloader in its MBR (Master Boot Record). If you have an entire image of a bootable disk (not partition), then writing that image onto another disk should make that disk bootable …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

When you have column-major ordering, that matrix would look like this in memory:

AA AB AC AD BA BB BC BD CA CB CC CD DA DB DC DD

When you have row-major ordering, that matrix would look like this in memory:

AA BA CA DA AB BB CB DB AC BC CC DC AD BD CD DD

The translation part of a homogeneous transformation matrix is in the last column (DA, DB, DC). In other words, you have orderings like this:

column-major:
AA AB AC  0 BA BB BC  0 CA CB CC  0  x  y  z  1

row-major:
AA BA CA  x AB BB CB  y AC BC CC  z  0  0  0  1

What I'm trying to find out is if I was given a matrix pulled from either platforms, would a formula for one work with the other (x,y,z translation index stay the same).

No, it will not. You should abstract away that detail. Most matrix or linear algebra packages can easily accommodate either ordering by simply hiding it away in the implementation details of a matrix class. For example, here is a very simple way to handle this:

class Matrix4x4 {
  private:
    double* data;
    bool is_column_major;

  public:
    // ....

    double operator()(int i, int j) const {
      if(is_column_major) 
        return data[j * 4 + i];
      else
        return data[i * 4 + j];
    };

    double& operator()(int i, int j) {
      if(is_column_major) 
        return data[j * 4 + i]; …
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

The dd command is essentially just a raw byte copying program. If the "in file" is a disk and the "out file" is a file, it just writes the entire content of the disk (byte-per-byte) into the file. If the "in file" is a file and the "out file" is a disk, it just writes the entire content of the file (byte-per-byte) into the disk. It's that simple.

WARNING: Be careful, dd is a very close-to-the-metal utility; there are no safeguards in that application. It does a pure raw byte-per-byte overwrite of the destination media (disk or file). If you have doubts about what you are doing with it, you probably shouldn't be doing anything with it.

rubberman commented: Succinctly put Mike. :-) +12
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

The two C++ compilers that matter support it:
GCC supports it, see here.
LLVM/Clang also supports it, see here.

The Intel compiler is only good if you compile for an Intel architecture (AFAIK). The same goes for the IBM compiler, which is only good for IBM platforms (mostly super-computers and clusters). There are no other serious compilers besides those four (GCC, Clang, ICC, IBM).

I recommend you just check the imgtec site on developer tools.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

First of all, the "systems programming" term is a bit general and refers to many areas, which usually have the following characteristics: (1) high performance requirements, (2) low overhead (memory, latency, etc.), and (3) involve building a complex software infrastructure. Examples of this might include OS kernels (or device drivers), compilers, database servers, virtual machines, computer games, etc..

Obviously, stuff that is more low-level (kernels, drivers, embedded software) also requires more low-level facilities, like being able to manipulate alignment, change individual bits, arrange precise memory layouts, and do arbitrary aliasing (pointers and unchecked type-casting). This alone rules out most of the "safe" languages, whether you can compile them to native code or not. For example, in Java, even compiled to native code, you cannot do any of these things at all, so it's ruled out.

Most "high-level" languages are essentially categorized as "high-level" specifically because they don't allow you to do any of these "low-level" things. So, that's a pretty important dividing line. And that's why, for low-level applications, the list of reasonable candidate languages is pretty short: mostly Assembly, C, C++, D, Ada, and maybe a few others. Traditionally, people also classify languages as "low-level" in reference to their lower level of abstraction (i.e., how little they conceptually move away from the machine), but I find that classification rather pointless because it doesn't convey the real reason why low-level languages are used for low-level tasks: not because they lack abstraction, but because they grant low-level access (e.g., C++ does not lack abstractions, but …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Here is the documentation for the random function. It appears that random(N) generates a number between 0 and N-1, very much like the more standard rand() % N would do. The randomize() function is the equivalent of srand((unsigned int)time(NULL)), which is used to seed the random number generator.

So, to the OP's question, it appears that your friend's logic is correct, i.e., the code random(2) + 2 should output either 2 or 3.

It goes without saying that these functions are not recommended at all. First, they are not standard functions (look at rand() / srand() for standard C functions, and look at the <random> header for the standard C++ options). Second, these functions are only available in Turbo C, which is more than 20 years old, and pre-dates even the first official version of C++, and is barely more recent than the first ISO-standard version of C. And third, these random number generators are poorly implemented (mod-range is non-uniform, and seeding with time is not great).

sepp2k commented: +1 +6
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I remember the mass homeopathic overdoses from a few years back. And although the NHS provides homeopathy, you can just feel their disdain for it in their own site.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Homeopathy is major BS. No doubt about it. Basically, if its principles were true, dropping one drop of beer in the ocean would cure all the alcoholics of the world. Yeah, right!

At best, it's nothing more than a placebo.

ddanbe commented: I could'nt agree more. +0
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Back in the day, I created some water effects (and other similar effects) using Perlin Noise. There are also alternatives to that classic method, but the general idea is the same. You can generate textures, height-maps (almost-flat mesh, with a regular x-y grid), and/or bump-maps (a texture that is used as "bumps" instead of colors).

The nice thing with these "noise" methods is that they are adjustable and additive. So, for example, you could have a height-map that contains only the most visible waves (low freq., high amplitude), and on top of that, a texture and bump-map for smaller ripples.

To make things dynamic (moving with time), the simplest method is just to generate the textures / maps at regular intervals (e.g., once or twice per second), and interpolate (e.g., through a pixel / vertex shader) continuously between the last and the next model. That worked out very well for me back in the day (more than a decade ago), and as far as I know, this is still a popular method (it's just that now, you can do it with much more detail and depth than I used to be able to achieve with the older hardware).

Also, for water, you have to remember that specular reflection (mirror-like reflection) is very important to really achieve the perfect effect. And bump mapping also becomes even more important when specular reflection is applied.

If I were you, I would say, just go step …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

The error comes from the fact that the compiler has no idea what stackADT is. And I have no idea what that is either. Did you forget to include the header in which that template is declared?

Also, before you get more errors, you should know that you have to put the definitions of the template code in the header file. Read this.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

don't we run the risk that 2 different threads will first use it at the same time?

No, there is no such risk: local static variables are thread-safe (since C++11). The standard requires that some form of synchronization is used to guarantee that the object is initialized exactly once. I covered this in a thread a little while back, see here.

What is the underlying logic behind that difference of treatment here between the inline and non-inline function?

A function that is defined in a particular translation unit will become available to be called (after loading the program) once all the static data (e.g., global variables or static data members) defined in the same translation unit are also loaded and initialized. For functions that have been inlined by the compiler, their definitions actually appear where they are called (that's what inlining means), which could be in a different translation unit, and therefore, it cannot be guaranteed that the static data from the function's original translation unit has already been initialized. That's the difference. This is just another case of the static initialization order fiasco.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

That example uses a C99 feature called Variable-Length Arrays (VLAs). The feature was introduced in C99, but later made optional. The compiler-vendors that support it in their C compiler generally also support it as an extension in their C++ compiler. For example, it works on GCC. The Microsoft compiler has frozen its C support at C90, and therefore doesn't support much of anything in C that is younger than 24 years old.

Anyways, strictly speaking, this is not standard C++, just a very common extension (GCC, Clang, ICC, etc.). In standard C++, the array bound must be a constant expression, as your quote says.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Here is another answer to your question: "Who can name two English words that are their own opposite?" Answer: Wikipedia (Don't click! It's cheating!).

Come to think of it, that's another word that is its own opposite, i.e., the word wikipedia, as it stands for both an endless repository of truths and an endless repository of lies.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

why switch to windows 8?

Do these reasons count:

  • Mental illness
  • Masochism
  • Seeking relief from the fat wallet syndrome

I'm just kidding. ;)

I've never used Windows 8, so I can't really comment.

Because any mistake cannot be tolerated? Sometimes I think that the Microsoft hate is taken a little too far. Innovation involves risk of mistakes, and an iterative approach as real feedback comes in

I think that what makes Microsoft's fumbles with Windows more egregious is the fact that (1) you have to pay for the faulty product, (2) you cannot choose an alternative if you don't like their "creative direction", and (3) you have to wait 3 years for the next iteration.

Compare this, for example, to the uproar when Ubuntu introduced the Unity interface. It was a risk and many people didn't like it. For people who really didn't like it, there were plenty of alternatives (using an older version, installing the classic UI, installing a different distro, etc.). Most complained for a little while, but by the time the next release came (6 months later), most of the problems had been worked out, and most people were happy. And there is only so much you can complain about something you get for free.

With a faulty version of Windows (like 98, Me, Vista, and 8), the situation is completely opposite. You generally don't have alternatives when buying a new computer. You have …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Well, there is at least one that I can think of, being a French speaker. The word formidable can mean either dreadful or awesome. In French, it is only used (AFAIK) to mean awesome, but in English, it's mostly used to mean dreadful (or very intimidating). So, that's why I remember that one.

I cannot think of another word right now, but I can still answer your question: "Who can name two English words that are their opposite?" Answer: Not me.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Aren't they the same?

Not exactly, but there are definite similarities. When you return an object from a function, there are two contexts to look at: within the function (callee) and just outside of it (caller).

In the callee context, when return-value-optimization (RVO) does not apply, the compiler will always implicitly try to use the move-constructor to create the returned object from whatever you provide in the return-statement. This is because if what appears in the return-statement is not some sort of object of a wider scope (data member of the containing class, or global variable, or passed-by-reference parameter) then it must be either a temporary object or a local object, and either way, it will be immediately destroyed as the function returns, which makes it safe to be "moved-out" of the function.

In the caller context, the object that is returned by a function is a temporary object. Here, I'm talking about that moment just after the function returns and before the return value is assigned to something (e.g., to a local variable). At that instant, it's a temporary object, i.e., an rvalue, which will bind (preferably) to an rvalue-reference, leading to the use of move-semantics (e.g., move-constructor or move-assignment).

However, in either or both of these contexts, the move-construction can be optimized away by RVO or NRVO. This was also the case with the copy-constructors in prior versions of C++, and it is also still the case for objects that currently don't have a move-constructor. So, …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

So the solution would probably be that when the compiler encounters a template cstr array argument it puts it in a special area of string storage in its internal representation.
At time of linkage with other compilation unit, all those special storage area are merged together so that any duplicated cstr gets a single final storage area/address in the final .exe.
By keeping a separate area for those strings, the compilers avoids mixing them with other strings which would not be affected by this.

Yes. That is the solution, in general. This is pretty much what the compiler does for integral template parameters (e.g., int). The point is that an integral type (like int) can be dealt with at compile-time, meaning that the compiler can compare integer values to determine that they are equal, and it can also create a hash or some other method to incorporate the integer value into the name-mangling of the instantiated template. So, when you have some_class<10> in one translation unit and some_class<10> in another, they will both resolve to the same instantiation and therefore can be merged or otherwise considered as the same type at link-time.

If string literals could be treated by the compiler the same way as integral constants, then the compiler could do the same. However, types like char* or char[] (which are identical, by the way, if there is immediate initialization) have the issue that they could also just point to a string with external linkage, …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

The implementation occurs at the preprocessing step.

It doesn't matter when it occurs. Moreover, it cannot occur at the preprocessing step, because it requires semantic analysis, which occurs at the compilation step.

Anyways, what you demonstrated there is what compilers already do when instantiating templates. The addition of the static member is not really important, compilers have other mechanisms for that purpose. This is not the core of the issue at all, and the problems I mentioned still apply, especially the ODR violation!

Here is a more explicit illustration of the problem:

In demo.h:

#ifndef DEMO_H
#define DEMO_H

template <char* Str>
struct demo { /* some code */ };

void do_something(const demo<"hello">&);

#endif

In demo.cpp:

#include "demo.h"

void do_something(const demo<"hello">& p) {
  /* some code */
};

In main.cpp:

#include "demo.h"

int main() {
  demo<"hello"> d;
  do_something(d);
};

Now, the sticky question is: Is the "hello" string in demo.cpp the same as the "hello" string in main.cpp? If not, then the type demo<"hello"> as seen in main.cpp is not the same as the type demo<"hello"> seen in demo.cpp.

Consider this (stupid) piece of code:

In half_string.h:

#ifndef HALF_STRING_H
#define HALF_STRING_H

#include <cstring>  // for std::strlen
#include <string>   // for std::string

template <char* Str>
struct half_string { 
  char* midpoint;
  half_string() : midpoint(Str + std::strlen(Str) / 2 + 1) { };

  static const char * p_str;
};

template <char* Str>
const char * half_string<Str>::p_str = Str;

std::string get_first_half(const half_string<"hello">&);

#endif

In half_string.cpp:

#include "half_string.h"

std::string get_first_half(const half_string<"hello">& p) {
  return std::string(half_string<"hello">::p_str, …