mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Some time ago, a colleague happened to ask me if I was active on any online forums. I told him I was a mod on Daniweb. He said: "Daniweb? Never heard of it." So, I popped up a browser on the Daniweb page, and he said: "Oh yeah, the purple forum."

I think it would be kind of impossible to ever change that color at this point. And even if it was possible, I do like the purple color scheme a lot. I'm actually surprised that it isn't more common. So, I guess you can file me under the "keep it purple" camp.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

she calls the sophmores her pet peeve

So your teacher calls her students an annoyance? Sounds a bit self-defeating to me.

Living in a busy city where I walk everywhere I go, my pet peeve is all those people who just shouldn't be walking in a busy city. You know: those who walk slowly and drift unpredictably from side to side so that you can't walk past them without busting a few dance moves; those whose field of vision is limited to the few inches of their cell phone's screen, and who inevitably bump into people or into things; those who don't understand that they should stick to the right side (right as in "not left") when passing people coming the other way; etc. etc.

People who use "I" instead of "me" thinking that it makes them sound educated and proper

@Rev: You must hate the expression "Me, Myself and I". I totally agree with yourself.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

i tend to watch the games that have a brawl involved... is that normal?

Not so much anymore (stricter rules, and more suspensions, as opposed to just having a penalty or being sent to the locker-room). Unless you are watching some semi-professional leagues. Our local semi-professional league (Ligue de Hockey Junior Majeur du Québec, or LHJMQ) is notorious for being just a pretext to have boxing matches on the ice. And I'm sure many local / provincial leagues in Canada are the same. They literally have some players designated to do the fighting (to fight the designated players from the other team), while the others (normal players who just want to play at this semi-professional level) fill the time between the brawls by playing some hockey. People even take bets on the different fights that we all know are inevitable in any given game, kinda like unlicensed boxing matches. Basically, it's a spectacle that combines hockey, boxing and freezing cold wooden seats... fun for the whole family.. ;)

But, of course, hockey is supposed to be rough around the edges ;). As long as players stick to the rules (legal tackles, e.g., with the shoulder or hip) and aren't too vicious, it's a fun game, but as I said, I'm not a huge fan. I find it too slow, and not inventive enough. On ice, I would watch a Bandy match any time over a hockey match (Bandy basically takes the rules of soccer (football) …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

-- Hint: I fixed your formatting. Errors are easier to spot that way. You should develop the habit of doing so yourself in the future. --

My guess is that the error comes from forgetting to put a default: statement (or case 3:) between lines 118 and 119. Basically, if the "enemy" is not 1 or 2, then the variables (ENatk, ENhull, ENengine, ENwepsys) remain uninitialized, which explains the weird values you get afterwards (they come from line 128, where ENatk might be uninitialized and cause "damage" to be some very weird / large number).
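
To illustrate with a stripped-down, hypothetical example (not your actual code), here is the kind of thing I mean:

#include <iostream>

int main() {
    int enemy = 3;   // suppose this ends up being neither 1 nor 2
    int ENatk;       // deliberately left uninitialized

    switch (enemy) {
        case 1: ENatk = 10; break;
        case 2: ENatk = 20; break;
        default: ENatk = 5; break;   // without this, ENatk holds garbage whenever enemy is not 1 or 2
    }

    std::cout << "attack = " << ENatk << std::endl;
    return 0;
}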

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

@mike_2000_17 The whining is worse here in Winnipeg because we just got an NHL team back (BFHD*) after years without.

At least you guys got your team back... we're still waiting / whining / bitching to get the Nordiques back... it was pretty much a national disaster when they were sold to Colorado and ended up winning the Cup the next season... ahh.. painful memories.. I've never been a huge hockey fan, but the Canadiens / Nordiques rivalry was awesome. Hopefully, the fact that ice hockey is THE Canadian sport will eventually seep through Gary Bettman's thick skull.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Can you explain what Line 13 is supposed to be? The line goes like this:

USHORT shape1 = triangle, square, rectangle, circle;

What do you expect that line to mean? Because I have no clue. It is definitely not valid C++ code, but I cannot tell you how to fix it until I get a clue as to what you intended it to mean.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

@Rev: That's the official summer sport. Ice Hockey is the official winter sport.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Did you make sure to include the header required for the class PdfContentsGraph? Which is:

#include "PdfContentsGraph.h"

using namespace PoDoFo;  // <-- this is needed too, otherwise you have to use it as 'PoDoFo::PdfContentsGraph'.

The error message is exactly what you get when you forget to include the right header or to specify / use the namespace in which the class lives. Note that the "expected ';'" part is meaningless; it's just the result of the compiler being a bit confused after failing to recognize the class named on that line of code.

Also, I hope you realized that this particular example (from PoDoFo) also requires you to download the PdfContentsGraph.h header and its source, as seen at the bottom of this page.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

There are a number of ways to do this, depending on the features you require.

If all you want is a unique account number for each new account that is created, you can control that with a factory function and a static "last" account number being tracked. In that case, you probably also want to make the account non-copyable and only carry the account through a smart pointer (shared_ptr) which you can get either from the standard library (if your compiler is recent enough), or through TR1 (extension to standard library), or through the Boost library. Here is a simple example of what I described:

#include <memory>   // for std::shared_ptr

class bankAccount
{
public:
    ~bankAccount() {}
    int getNumber() { return accountNumber; }
    int getBalance() { return accountBalance; }
private:
    // private constructor:
    bankAccount(int aAccountNumber) : accountNumber(aAccountNumber), accountBalance(0) { };

    bankAccount(const bankAccount&); // non-copyable
    bankAccount& operator=(const bankAccount&); // non-assignable

    int accountNumber;
    int accountBalance;
public:
    // public factory function:
    static std::shared_ptr<bankAccount> CreateNewAccount();
};

// in the cpp file:

std::shared_ptr<bankAccount> bankAccount::CreateNewAccount() {
    static int lastAccountNumber = 0;  // static 'last' account number.
    return std::shared_ptr<bankAccount>(new bankAccount(lastAccountNumber++));
};

And that's it. Because the variable lastAccountNumber is static within the factory function, it will only be initialized once, upon the first call to the factory function, and will retain its value afterwards, incrementing with each new account being issued.
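
For example, usage would look something like this (a quick sketch, assuming the class above is visible and a C++11 compiler for std::shared_ptr):

#include <memory>
#include <iostream>

int main() {
    std::shared_ptr<bankAccount> acc1 = bankAccount::CreateNewAccount();
    std::shared_ptr<bankAccount> acc2 = bankAccount::CreateNewAccount();
    std::cout << acc1->getNumber() << " " << acc2->getNumber() << std::endl;   // prints: 0 1
    return 0;
}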

If you want more features, such as being able to look-up a particular account by number, then you need a class that handles the list of existing accounts. …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

First, you should split the recursive function into two: the top-level function (that is called from main) and the function that is called recursively. You use the top-level function to make the initial call to the recursive function, and then report the result.

As so:

void recursiveSwap_impl(char myArrayR[], int left, int right)
{
    if (left == right)
        return;
    else
    {
        recursiveSwap_impl(&myArrayR[left], left + 1, right);
        if (myArrayR[left] >= 'A' && myArrayR[left] <= 'Z')
        {
            recursiveSwap_impl(&myArrayR[right], left, right - 1);
            if (myArrayR[right] >= 'a' && myArrayR[right] <= 'z')
                swap (myArrayR[left], myArrayR[right]);
        }
    }
    return;
}

void recursiveSwap(char myArrayR[], int randSize)
{

    recursiveSwap_impl(myArrayR, 0, randSize-1);

    cout << "Recursive Swap" << endl;
    for(int i = 0; i < randSize; i++)
        cout << myArrayR[i] << "\t";
    cout << endl;
    return;
}

That's the basic structure. I've fixed a few basic things in your recursive implementation, but the logic is wrong, and so is the logic of your iterative version. Have you tried to run it? What your iterative version will do is find the first pair of letters and then loop infinitely, swapping that pair back and forth. To fix that, you need, at line 82, a statement like ++i; --j; to move the positions along after the swap has been done. Also, the condition for looping should be while (i < j), because it is possible that i gets incremented while j gets decremented and they end up never being equal, but with j before i after …
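
To illustrate the idea (only a rough sketch, guessing that the intent is to swap uppercase letters found from the left with lowercase letters found from the right; adapt the names to your own code):

#include <algorithm>   // for std::swap
#include <cctype>      // for std::isupper / std::islower

void iterativeSwap(char myArray[], int size) {
    int i = 0, j = size - 1;
    while (i < j) {                                   // stop when the indices cross
        if (!std::isupper((unsigned char)myArray[i]))
            ++i;                                      // skip non-uppercase from the left
        else if (!std::islower((unsigned char)myArray[j]))
            --j;                                      // skip non-lowercase from the right
        else {
            std::swap(myArray[i], myArray[j]);
            ++i; --j;                                 // move past the swapped pair (the missing step)
        }
    }
}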

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Alas, my current laptop must go in to the shop for some overdue repairs. My power supply and/or connector are failing, my keyboard is chattering and three of my USB ports are intermittent. Also, my feet have fallen off (the little rubber ones) and my display is dimming. I put up with most of the problems but the power system thing was the final straw. Good thing it is under warranty (bought in 2008, extended twice). Next September it is time for a replacement system.

Wow, PETL must not like you (PETL: People for the Ethical Treatment of Laptops) ;) You must not be too kind to your laptops.

I have a 5-year-old Toshiba laptop. I had to replace the keyboard once due to spilled coffee, and the firewire connection got busted in the same incident. Other than that, everything is still tip-top with it, even after having dragged it around all over Europe for two years. But at that age (the computer's, I mean), when I realized I could buy a new laptop that would be about 20 times better (hard drive, CPU and memory) for a few hundred bucks, I figured it made no sense to wait and keep enduring its shortcomings, performance-wise. Now I just keep it as a backup; it still works pretty well, after wiping off Windows in favor of Linux (Fedora).

Anyway, what this means is that I have to use an ancient IBM Thinkpad (circa 2003) for 8-10 days until I …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

“Never. Never ask for what ought to be offered.”
― Daniel Woodrell

Would you guys recommend me for a mod for Daniweb????

Be patient, my friend. When I became a mod, I had about 4 times your post count, had been around for about 2 years, had about 100 times your reputation points, and was above 95% post quality. You are on the right track, and we have certainly appreciated your contributions, and hope you will keep it up. But I couldn't say you have reached the point of being considered a candidate for promotion to mod status just yet.

I don't quite understand the strength of your desire to be a moderator. After all, all it means is that here and there you resolve those "flag bad post" issues, plus the occasional post where you put your moderator hat on to moderate a thread (or remind people of the rules); the extra "powers" are few, and not to be abused anyway. And the "respect" you get is something that comes from the quality of your posts / contributions; the mod status has little to do with that.

pritaeas commented: Well put. +0
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

To be honest, I'm not sure that what I was writing is as big as a full tutorial

My tutorials are not a good example length-wise. They are much too long, and I would most probably not write them as long if I were to do them again.

As I said, I'll take some more time to get something more complete together.

Just out of curiosity, what is it about?

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

A while back, as part of the newsletters, there was an invitation to write tutorials, and the instructions were given more or less as follows: post a normal "Discussion Thread", and then send a PM to the relevant moderators and admins asking them to turn it into a "Tutorial", if they deem it worthy. Or you can also write it first (not as a thread, but just as a text document), send it via PM to mods and admins, and then, if accepted, possibly with suggested modifications, post it as a "Discussion Thread" and notify the admin(s) so it can be turned into a tutorial. That's what I did for the two tutorials I wrote in the C++ forum. As a mod, I don't seem to have the power to turn an article into a tutorial; only admins can do that. For the C++ forum specifically, the relevant mods who can judge your tutorial would be mainly myself, deceptikon, ~s.o.s~, and WaltP. As for who can finally turn it into a "tutorial", I guess it has to be happygeek (or admins like Dani, deceptikon or ~s.o.s~).

I guess it could be worth mentioning somewhere on the page for creating an article how to submit a new tutorial. Another little thing to add to the "to-do" list for Dani and deceptikon.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

First hit on google: makefile tutorial. Do you have any specific question?

Remember that makefiles are space-tab sensitive (a very annoying aspect of makefile syntax), meaning that the number of spaces and tabs (and where tabs appear rather than spaces) is important.

Frankly, writing makefiles directly by hand is outright torture. Most people use something else to generate the makefiles (and that's what most IDEs do under the hood too). I recommend something like CMake, which is just as light-weight as make but has a much simpler and more intuitive syntax. It is good to be familiar with makefile syntax, mostly as a skill for being able to diagnose problems (read makefiles) and do some quick fixes in the makefiles, but actually writing a makefile from scratch is rather unusual; even the more masochistic people use a tool like autoconf (which is only marginally better than makefiles).

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

For those who like bizarre and reflective black / absurd humour, you need to see "Du Levande" (You, the Living). It is by Roy Andersson, an enigmatic Swedish writer-director who has made very few movies (it's not his full-time job), but they are always weird, poetic, atmospheric films with that touch of black / absurd humour.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

If we're going down the road of French movies, there are plenty of good ones. Of course, the two-part classic "Jean de Florette" and "Manon des sources" is a must-see epic. As for Luc Besson, his best film is no doubt Léon (The Professional); that's a real piece of art. Otherwise, Luc Besson's films are always entertaining with a touch of humor; he's definitely a good writer and a decent director, but firmly camped in the fun-action movie genre.

Le dîner de cons is really a hilarious movie (it helps if you understand French, of course). It is one of many on a long list of very funny French movies stained by a terrible American remake (other notables on the list include "Taxi", "3 hommes et un couffin" (Three Men and a Cradle), etc.).

"Tell No One" looks interesting. I definitely know Guillaume Canet's face (although he's not a memorable actor, better at directing, I hope) for going from being married to Diane Krüger to hooking up with Marion Cotillard... lucky bastard. Anyways, I'll probably check out that movie, partly because I love Marie-Josée Croze, she's a great actress and her movies are always good (especially Maelström).

Of course, in the French department, you can't forget Amélie, the ultimate feel-good movie.

On the French-canadian side, some notables are "Bon Cop, Bad Cop" (especially funny to canadians), "C.R.A.Z.Y.", and "

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

"Ben X" was a pretty good Flemish movie (Dutch-Belgian).

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

There are plenty of options, from the most basic to more sophisticated setups. As a beginner, you will want something down the middle, like a simple program that you can install that lets you code small applications and hit a "play" button to compile-and-run your code. What best fits that description is CodeBlocks; make sure to download the version that comes with "MinGW" (which is the compiler, because technically speaking, CodeBlocks is just the IDE software (IDE: Integrated Development Environment)).

On the more minimalistic side, you can simply install MinGW alone and learn to use it through the command-line (via MSYS or cmd.exe, or PowerShell). Then, you can edit the code itself with either a light-weight IDE like CodeLite or Geany, an enhanced text editor like Notepad++, emacs, or vim, or even with a plain text editor like Notepad. You'd be surprised how many experienced programmers use this kind of lean setup (command-line compiler + build scripts + enhanced text editor).

On the more sophisticated side, especially if you plan to do GUI programming, then you probably want to go for a heavier option. You can get Microsoft's Visual Studio Express editions for free, which is fully-featured for all practical purposes (missing / reduced features are things a beginner wouldn't need anyways). However, Visual Studio has a steep learning curve and its GUI programming facilities are pretty bad. For GUI programming specifically, I recommend using QtCreator, which is a pretty good IDE with a great …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Yes, there's a bit of a learning curve if your accustom to gcc compiler.

I'll be the first to admit that my acquaintance with VS products is shallow at best, and that is partly responsible for why most of my experiences with it have been a nightmare. That said, I have very little incentive to walk up that learning curve. The two main selling points of VC++ are the powerful debugger and Intellisense. A debugger is not something I need; it's been years since I last used one. And Intellisense is pathetic at best. As soon as your project has 20-30 LOCs or more, Intellisense crumbles like an old man trying out for an Olympic hurdle race. So, I'm left with a terrible compiler (for C++), an insufferable build system and an extremely heavy-weight text editor (and not even a good one at that). To me, it doesn't feel like climbing up a learning curve but rather like falling down a rabbit hole. End of rant.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I know that VS 2008 and 2010 can co-exist pretty well. But that's all I can really tell you, I haven't used or installed 2012 yet (and I recently had to do some work with VS 2010, which was extremely frustrating, as usual with VS products, so I'll need a cool down period before I can consider using VS 2012).

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

It might help you to know that "sorting using priority queue" is pretty much a synonym for an algorithm called Heap Sort. A priority queue is almost invariably implemented as a heap data structure. And once you have the elements arranged in a heap structure, it is a simple matter to construct the sorted array from it.
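
For illustration, here is a minimal sketch of "sorting with a priority queue" using the standard std::priority_queue (which is implemented as a heap under the hood):

#include <iostream>
#include <queue>
#include <vector>

int main() {
    int raw[] = {5, 1, 4, 2, 3};

    // push everything into a priority queue (a max-heap by default):
    std::priority_queue<int> pq(raw, raw + 5);

    // pop the elements back out; they come out in decreasing order
    // (use std::greater as the comparator if you want increasing order):
    while (!pq.empty()) {
        std::cout << pq.top() << " ";
        pq.pop();
    }
    std::cout << std::endl;   // prints: 5 4 3 2 1
    return 0;
}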

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I just watched "Let the Right One In" (Låt den rätte komma in). It was a pretty special vampire movie. I definitely recommend it.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

As nmaillet said, pick a sorting algorithm and try to implement it. The wiki page has a pretty complete list. You should probably start with one of the simpler methods (the O(n^2) algorithms) like Bubble Sort, Insertion Sort, or Selection Sort. As for reproducing the behavior of the standard sort algorithm, it uses an intro-sort algorithm, which requires that you also implement quick-sort and heap-sort. Merge-sort is another interesting alternative.
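
If it helps to see the general shape of one of the simpler O(n^2) methods, here is a rough insertion sort sketch (but do try writing it yourself before peeking):

#include <iostream>

// insertion sort: grows a sorted prefix of the array one element at a time.
void insertion_sort(int arr[], int n) {
    for (int i = 1; i < n; ++i) {
        int key = arr[i];
        int j = i - 1;
        while (j >= 0 && arr[j] > key) {   // shift larger elements one slot to the right
            arr[j + 1] = arr[j];
            --j;
        }
        arr[j + 1] = key;                  // drop the key into its place
    }
}

int main() {
    int a[] = {5, 2, 9, 1, 7};
    insertion_sort(a, 5);
    for (int i = 0; i < 5; ++i)
        std::cout << a[i] << " ";
    std::cout << std::endl;                // prints: 1 2 5 7 9
    return 0;
}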

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

how do you make your cool or stress free in exams days ?

Hmm.. this might sound boring, but... how about being well prepared? For the most part, I made sure to sustain a genuine interest in the course material during the term (which just requires an open mind and general curiosity, both prerequisites to learning). That compelled me to integrate the material and seek a good grasp of it as the term progressed. I went to class, did the assignments / exercises (and did them myself), and took notes. When the exam came, I usually studied a few hours in the days before the exam (maybe up to 10 hours in the worst cases). On the exam day, if you are confident in your grasp of the course material, and in your ability to solve the problems given, then there is no "bad stress", just the good old adrenaline rush of having to deliver in time, which is generally helpful. So, the key is confidence, which is built only by working towards being prepared, which is something that you can't do at the last minute.

Of course, this doesn't really apply to "learn-by-heart" courses, only to engineering-style courses where all that matters is your ability to solve problems, and this is partly why I went to engineering school in the first place.

There have been a few exceptions for me, and in most cases, it was a matter of not being prepared, …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I understand that iostream.h is outdated and Microsoft Visual Studio has iostream.

Headers like <iostream.h> are pre-standard (from before C++ was standardized), so they date back to the 90s or earlier. Most compilers probably still support them for legacy reasons, but they don't have to, and some don't.

However, When I remove the ".h" the setf, ios, etc, line 6 has multiple errors.

This is because of namespaces. The old ".h" version of the I/O stream library pre-dates the introduction of namespaces into the C++ language. Since standardization in the 90s, all components (classes, constants and functions) are required to be within the std namespace; before that, they were in the global namespace.

Also, the manipulators you refer to, like "setw" or "setprecision", are technically obtained from the standard header <iomanip>, not <iostream> ("setf" is a member function of the stream objects themselves), although on many compilers, including <iostream> also pulls in <iomanip>.
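
For example, a minimal illustration of those manipulators with the standard headers:

#include <iostream>
#include <iomanip>   // for std::setw, std::setprecision, std::fixed

int main() {
    double pi = 3.14159265;
    std::cout << std::fixed << std::setprecision(2)
              << std::setw(10) << pi << std::endl;   // prints '      3.14' (padded to width 10)
    return 0;
}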

But, what does that code mean?
What does namespace actually mean? What does it do?

A namespace is, as its name implies, a space for names. The idea is simple. If you have all these standard libraries, plus your own code, plus the entirety of the codes and libraries that you could grab from the internet, it is pretty much inevitable that there would be name conflicts between them (e.g., two separate libraries choosing to use the same name for a component that they provide). In the old days (i.e., in older languages that don't have namespaces), people would …
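
Here is a small sketch of the idea in code (the library names are made up, of course):

#include <iostream>

namespace lib_one {
    void print_version() { std::cout << "lib_one 1.0" << std::endl; }
}

namespace lib_two {
    void print_version() { std::cout << "lib_two 2.3" << std::endl; }
}

int main() {
    // same function name, no conflict, because each one lives in its own namespace:
    lib_one::print_version();
    lib_two::print_version();

    using namespace lib_two;   // pull lib_two's names into the current scope
    print_version();           // now refers to lib_two::print_version
    return 0;
}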

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Will running it on a Linux server be a good idea since I have hardware in my plans?

Yes, hands down. That is, if you have a choice in the matter.

I guess I would want blocking since I don't want program B to tell the hardware to do anything when program A is writing into the file. Would I still use piping/sockets for this?

Yes, exactly. With blocking operations, what happens is that the program that reads (usually called the "consumer") will block (wait) until there is something new on its input stream before returning from the reading operation and subsequently processing the data received. Setting things up correctly on the writer side (usually called the "producer") will make it so that the consumer won't start processing input until the producer has written all that is needed.

Here is a simple example of writing two programs (producer-consumer) and piping them with each other:

producer.cpp:

#include <iostream>
#include <cmath>     // needed for sin() function.
#include <unistd.h>  // needed for Linux usleep() function.
using namespace std;

int main() {
  // output the result of a sine-wave in time:
  double t = 0.0;
  while(t < 10.0) {  // output for 10 seconds.
    cout << (1.0 + sin(10.0 * t)) << endl;  // output to standard-output (cout)
    usleep(10000);
    t += 0.01;
  }
  return 0;
}

consumer.cpp:

#include <iostream>
#include <iomanip>
using namespace std;

int main() {
  // calculate and print out a running average of the input …
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Happy birthday Dani!

I haven't tested my typing speed for a long time (the last time was probably in high school). I took your test and scored a rather unimpressive 50 WPM. I think having to read off what I need to type next throws me off a bit (I'm a very slow reader, borderline dyslexic on that front).

But nowadays does typing consider as a skill anymore to find employment?

Probably not so much. The main reason for wanting to be super-fast at typing in the old days was for secretaries who would take dictation or transcribe memos, and so on. There are probably still some niche jobs where fast typing is a valuable skill (e.g., stenography). But I don't think that many jobs today require enough intensive typing for the WPM speed to actually make a noticeable difference. I think the main limiting factors are how fast your mind can supply the words (or code) and the quality of what comes out, not so much the speed of your fingers.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

What you can do is add an additional parameter to the printing functions, one that takes an output stream by reference (ostream&). And you can also make cout the default stream, as so:

void print_height (players_rec players[], int howmany, int when_ft, int when_inch, ostream& dest = cout);
void print_year (players_rec list[], int howmany, string& year, ostream& dest = cout);
void print_weight (players_rec list[], int howmany,int maxweight,int minweight, ostream& dest = cout);
void program_info(ostream& dest = cout);

And then you implement them in terms of that given output stream, as so for example:

void program_info(ostream& dest)
{
    dest << endl << endl;
    dest << "Kellen_B_Prj4.cpp" << endl;
    dest << "Programmer: Kellen Berry" << endl;
    dest << "Date: 11/11/2012" << endl;
    dest << "CRN 14086" << endl << endl;
}

So, by default, all those print functions will do the same as they do now (print to cout), but if you pass them an output stream, they will output to that instead.
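
For example (just a sketch, assuming the functions above are declared; "report.txt" is an arbitrary file name):

#include <fstream>
#include <iostream>
using namespace std;

// ... the print functions declared above ...

int main() {
    ofstream file("report.txt");
    program_info();        // default argument: prints to cout, as before
    program_info(file);    // same output, but written to report.txt instead
    return 0;
}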

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

If you are under a Unix/Linux environment, this is exactly what pipelines are for, and it is exactly what they do. In Windows, there is this "equivalent" (not really equivalent because it is an extremely convoluted and annoyingly verbose alternative, but it technically does the same thing).

Otherwise, a good alternative is to use sockets on a loopback connection (IP address: 127.0.0.1). It's a little bit more trouble to set up than piping, but it is also more flexible (there are more features / options when creating a socket).

In both cases, with "blocking" behavior (which is the default, unless you specify that you want a non-blocking socket or pipe), you will have the "interruption" behaviour that you want. By the way, in computer science, the word "interrupt" means something quite different; the term you ought to be using is "blocking" (versus "non-blocking").

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

build-esential was already installed. I reinstalled it but still not work.

Make sure that you purge it before reinstalling:

$ sudo apt-get remove --purge build-essential
$ sudo apt-get install build-essential

Other than that, your problem is a real mystery to me. You might have a better chance on a Linux Mint forum or bug reporting system.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

There are a few different issues at play here. The basic guideline of "don't duplicate code" is mostly a consequence (or corollary) of the fact that more code equals more opportunities for bugs. And even purely copy-pasted code is still subject to that problem, especially if you factor in a (past or future) history of bug fixing (fix a bug in one place, forget to do it in the duplicate, etc.). That's a good rule to have on a day-to-day basis in reducing redundant code. That's really the core of the argument against code duplication, and it is a good argument, any experienced programmer will tell you that for sure.

Now, nothing is black and white. And here comes the issue of maintenance of production code and interfaces. First of all, the preventive measures that solve all those problems you described are simple and widely used in production code. They start with precise and carefully designed specifications for the behaviour of the functions that make up your code (or library of functions), in other words, an API specification (possibly only internal to the company). Generally speaking, when the expected behaviours of the functions (including all erroneous or exceptional cases) are well specified, then that is all their implementation needs to fulfill, and any bug-fixing to the code that doesn't make a difference in the specified observable behaviour is OK. Then, you need unit tests and qualification tests that verify that the correct behaviour is observed when running these …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Currently, MinGW ships version 4.7.0. There is also the TDM-GCC distribution, which is a bit closer to upstream at version 4.7.1. As far as I know, there are not too many differences between 4.7.0 and 4.7.2, so you'd probably be OK with either one. It is probably easier to live with that six-month lag than to try to build GCC from source on Windows.

On Linux, if you have the latest Ubuntu distribution, 12.10 (or any sister distro), then you will find 4.7.2 in the official repositories. Otherwise, I'm sure you can find a PPA repository that carries it, or hook into upstream repositories, but this kind of depends on your distro. Personally, I keep a rolling build of GCC hooked to the svn repository. Building the complete GCC suite takes a couple of hours on a decent machine, and the process is a breeze in Linux (just check out the svn repo and enter a few commands to configure-build-install it; you just have to wait a little while). I update and rebuild about every month or so.

Just a warning, if you use Boost libraries, make sure you use version 1.49 or later, because earlier versions don't work with g++ 4.7.* compilers due to some technical detail in the defaulting of move- and copy-constructors.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

See this C++ FAQ Lite entry.

Long story short, the definition (implementation) of all templates must be visible in all translation units that use it. Best way to achieve that is to put it in the header file. You can still separate the declaration and definition (but still both in the header file), if you want a cleaner set of declarations.
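
In other words, the layout looks something like this (a minimal sketch, with made-up names):

// my_template.h  (hypothetical header)
#ifndef MY_TEMPLATE_H
#define MY_TEMPLATE_H

template <typename T>
class Adder {
  public:
    T add(T x, T y);   // declaration
};

// definition, still in the header, so that it is visible to every
// translation unit that instantiates Adder<T>:
template <typename T>
T Adder<T>::add(T x, T y) {
    return x + y;
}

#endif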

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Being a geek and all, here are some command-lines to fix the World's problems:

The anarchists would say:

$ sudo reboot

The conservatives would say:

> chkdsk /r

The progressives would say:

$ sudo apt-get update

The neo-cons would say:

$ srm -R any_country@middle.east:/*

Christian fundamentalists would say:

> FORMAT C: /FS:Jesus

And the Muslim fundamentalists would say:

> FORMAT C: /FS:Allah
Reverend Jim commented: Great! +0
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

That makes no sense to me. First, utilizing the outer product doesn't make the inner loops disappear; it simply masks them inside a function (or product operation), but they still occur just the same. Second, it would be difficult to implement this outer-product mechanism without losing some performance in the overall algorithm. And for such a simple algorithm as Cholesky (which fits in 10 lines of code or so), it doesn't make much sense to do it in layers like that (i.e., one outer loop that calls some outer-product function, etc.). So, why exactly do you want to use an outer product inside the loop?

About parallelism, it doesn't really make sense to use parallelism unless the dimensions are very large, because of the overhead of parallelizing it. And if the dimensions were that big, you wouldn't be using a vanilla implementation of Cholesky and a vanilla storage strategy for the matrix anyways.

Btw, about your code, you should not use pow(a, 0.5); where you could instead use sqrt(a);, because the latter is clearer and more efficient. Also, you need to detect zero elements on the diagonal; otherwise you will have divide-by-zero problems. Other than that, it looks good.
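
For reference, here is a minimal sketch of what I mean by a vanilla implementation (not your code; it assumes dense, row-major storage in a std::vector<double> and bails out instead of dividing by zero):

#include <cmath>
#include <vector>

// Computes the lower-triangular factor L such that L * L^T = A,
// for a symmetric positive-definite n x n matrix A.
bool cholesky(const std::vector<double>& A, std::vector<double>& L, int n) {
    L.assign(n * n, 0.0);
    for (int i = 0; i < n; ++i) {
        for (int j = 0; j <= i; ++j) {
            double sum = A[i * n + j];
            for (int k = 0; k < j; ++k)
                sum -= L[i * n + k] * L[j * n + k];
            if (i == j) {
                if (sum <= 0.0)
                    return false;                // not positive-definite; also guards the division below
                L[i * n + i] = std::sqrt(sum);   // sqrt(), not pow(x, 0.5)
            } else {
                L[i * n + j] = sum / L[j * n + j];
            }
        }
    }
    return true;
}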

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

What is the purpose of static member functions in a class?

Well, others have already pointed out one kind of purpose for static member functions. But there are many others too. In other words, there are many "tricks" that rely on a static member function(s). So, it's pretty hard to pinpoint a single purpose for them. Here is a list of the purposes that I can think of off the top of my head:

  • Access and manipulate some static data members (i.e., values globally associated to the class, not the individual instances, like instance counts, global parameters, etc.).
  • Write factory functions (or "named constructors") that allow you to create objects of the given class through a static member function (which has access to (private) constructors). This can be useful for many reasons, like:

    • Allocating the new objects through some alternate scheme (e.g., like a memory pool).
    • Enforcing the use of a smart-pointer to the objects (e.g., std::unique_ptr or std::shared_ptr) by making it impossible to create objects outside of the static "factory" functions.
    • Registering some information about created objects in some global repository / database (e.g., a garbage collection scheme, or just general system-wide lookups of objects in existence).
    • Enforcing a "single instance" rule in what is called a Singleton pattern.
    • Simply providing many alternative construction methods which have distinct names, promoting clarity as to what this particular construction method does (i.e., instead of relying on overloading of the constructors, which can be ambiguous and unclear as to …
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Refer to the PoDoFo mailing list archive. Specifically, this thread which talks about your exact problem. For specific problems about building PoDoFo, they are probably better equipped to help you.

I actually just checked out the source for PoDoFo and built the library. There were a few annoying hurdles, and the build is very dirty (full of warnings), but it succeeded. It seems that the main problem is that they use a rather badly constructed cmake script for the build. They provide a number of modules (in ./cmake/modules) to find some of the external dependencies. However, these are outdated and buggy, so it's better to use the modules installed with cmake for those external dependencies that have one (you can do so by simply renaming the Find*.cmake file in question to something else that won't be picked up by cmake). On top of that, there is a bug in their top-level CMakeLists.txt at Line 30: they override the CMAKE_MODULE_PATH, which is a terrible thing to do. The line should be replaced by either:

SET(CMAKE_MODULE_PATH "${CMAKE_MODULE_PATH}" "${CMAKE_CURRENT_SOURCE_DIR}/cmake/modules")

or

SET(CMAKE_MODULE_PATH "${CMAKE_ROOT}/Modules" "${CMAKE_CURRENT_SOURCE_DIR}/cmake/modules")

Also, they use the old-style variable names like *_INCLUDE_DIR, as opposed to the new style *_INCLUDE_DIRS (e.g., they use ${FREETYPE_INCLUDE_DIR} instead of ${FREETYPE_INCLUDE_DIRS}), which is pretty outdated, too outdated for many of the more modern package-finding modules to provide them for backward compatibility (that may be why they provide their own outdated modules).

So, again, direct your build problems to the maintainers of that library, and …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Open the top-level "CMakeLists.txt" file. Find the line with the command INCLUDE_DIRECTORIES, and add to that list, between quotation marks, whatever additional directories are required to find those header files.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

It would help a lot if you pointed out which specific programming languages are involved (and platforms). Generally speaking, it will depend on a few different aspects.

First of all, if your current code is built in a very monolithic way (i.e., one solid block, with no "removable" parts), then you are in trouble (and probably have been for quite some time, since writing and maintaining monolithic code is a nightmare from start to finish). But, more moderately, if your code is just too intertwined to make any kind of easy split into modules (at least, conceptually speaking), then you should simply pack up what you have as some sort of "core" or monolithic kernel for your code-base, and then start writing the new code in a modular fashion. This will certainly involve writing some interfaces and adaptors to blend the old and new code, but, in the future, you'll be happy that you did so.

On the other hand, if you do want to make the current code modular, at minimal cost (effort), of course, then that's a more interesting discussion. Assuming that your current code is object-oriented (or somewhat analogous to it) for the most part of it, then you already have some structure to work with. The golden word in modular software is independence, i.e., minimizing inter-dependencies between different parts of the software such that some chunks of it can be reasonably isolated from other parts. You first make these splits conceptually (ask yourself …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

The problem is probably with the interaction of non-const references and the function binding library; they don't always play nice together (although they should, IMO). Try using a reference-wrapper. Something like this:

threads.push_back(thread (run_es_pval, start,  end, rankedSize, std::ref(datasetNames), std::ref(datasets), std::ref(dicoArray), es_threshold));
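
For reference, here is a small self-contained illustration of std::ref with std::thread (generic names, not your actual functions; it assumes a C++11 compiler):

#include <iostream>
#include <thread>
#include <functional>   // for std::ref / std::cref
#include <vector>

void accumulate(const std::vector<int>& data, long& result) {
    result = 0;
    for (std::size_t i = 0; i < data.size(); ++i)
        result += data[i];
}

int main() {
    std::vector<int> data(1000, 1);
    long sum = 0;

    // without the reference-wrappers, the thread constructor would copy the
    // arguments internally and fail to bind them to the reference parameters:
    std::thread t(accumulate, std::cref(data), std::ref(sum));
    t.join();

    std::cout << "sum = " << sum << std::endl;   // prints: sum = 1000
    return 0;
}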
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Yeah, good luck to our NY daniwebbers! Even north of the border, safe in my apartment thirteen floors up above downtown Montreal, the winds are whistling really strong. As long as the Daniweb servers don't get blacked out, the world won't come to an end. ;)

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Is there a country I should add to the list for future surveys?

You probably could just use different continents or sub-continents instead. With about a dozen sub-continents (North America, Latin America, Western Europe, Eastern Europe, Northern Africa, Southern Africa, South Asia, Middle East, East Asia, South-east Asia, and Oceania), it'll be manageable, and representative.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Just to add a remark, the more typical (hassle-free and safe) implementation of a singleton class is as follows:

// in header:

class Singleton 
{
    private: 
        Singleton() {
          // ...
        }
        Singleton(const Singleton&); // no implementation. non-copyable
        Singleton& operator=(const Singleton&); // no implementation. non-copyable
    public:
        static Singleton& getInstance();  // get instance by reference
        Animal *method(string something);
};

// in cpp file:

Singleton& Singleton::getInstance() {
    static Singleton inst; // just a single static instance of the Singleton object.
    return inst;
}
//...
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

@Otagomark

haw03013's claim is that his Java program finds 105 million prime numbers in 15.1 seconds. That's not that wild of a claim. I just took vijayan121's Sieve of Sundaram implementation, optimized it a little, ran it, and timed it for 105 million. I got 29.7 seconds, so it's not that far off. According to the link vijayan121 gave, the Sieve of Atkin implementation claims 8 seconds for primes up to 1 billion (i.e., N = 1,000,000,000), with the Sieve of Sundaram implementation, I got just about 13 seconds for the same feat. haw03013's algorithm, whatever it is, is certainly astoundingly fast, but it is within reason.

BTW, here is the "optimized" implementation of the Sieve of Sundaram:

#include <vector>
#include <iostream>

// sieve of sundaram
void generate_primes( int N ) {
    std::vector<bool> sieve( N >>= 1, true ) ;
    for(int j = 1, ii = 3; j < N; j += ((ii += 2) ^ 1) << 1 )
        for(int k = j; k < N; k += ii )
            sieve[k] = false;

    std::cout << 2 << '\n';
    for( int i = 1 ; i < N; ++i ) 
        if( sieve[i] ) std::cout << ( (i << 1) | 1 ) << '\n';
}

int main() {
    generate_primes(1000000000) ;
    std::cout << std::endl ;
}

When timing it, you must remove the printing code, of course. On my PC, it runs in just under 13 seconds without printing, and in about 21 seconds with printing. With N = 2147483647 …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

It is definitely not a compilation problem. The output you posted is only the cmake output given during configuration (compilation output looks very different). Also, it does say "Configuring incomplete, errors occurred!" and above it, the only error found is this one:

CMake Error at cmake/modules/FindPNG.cmake:70 (include):
  include could not find load file:
    C:/podofo-0.9.1/cmake/modules/FindPackageHandleStandardArgs.cmake

Call Stack (most recent call first):
  CMakeLists.txt:332 (FIND_PACKAGE)
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Oh, right, I didn't notice that at first. The error is about the missing FindPackageHandleStandardArgs.cmake module. Locate it on your computer or download it, and place it in the correct directory; that should fix it. If the other dependencies are optional, then there is nothing else you need to do.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

The simple fix is to take the template <class T> and put it just before the class template declaration. Also, you need to specify the type (for T) when creating an object, but you don't need it when calling its functions. As so:

#include <iostream>
using namespace std;

template <class T>
class calc
{
  public:
    T multiply(T x, T y);
    T add(T x, T y);
};
template <class T> T calc<T>::multiply(T x,T y)
{
  return x*y;
}
template <class T> T calc<T>::add(T x, T y)
{
  return x+y;
}

int main ()
{
  calc <int> c1;   // specify T = int
  int i = 5, j = 6, k;
  long x = 40, y = 20, z;
  k = c1.multiply(i,j);
  z = c1.add(x,y);
  cout << k << endl;
  cout << z << endl;
  return 0;
}
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

It seems to work. Now you just need to install the external libraries that are missing: tiff, cppunit, fontconfig, and lua50 / lua. Afterwards, rerun cmake and hopefully it will work.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

As deceptikon said, mixing new/delete and new[]/delete[] is undefined behavior because there is no requirement for the implementation (compiler + standard libraries) to use the same underlying mechanism for both variants. That's why it is undefined behavior: there is nothing in the C++ standard that defines the behavior such code should produce, which means you can't reasonably expect anything predictable out of that code. In addition, you can easily overload those operators and create your own allocation mechanisms that would make mixing the two variants result in a crash (or other corruption).

I thought delete[] just loops through everything and calls delete?

That's not true at all. The main difference between delete and delete[] is that the former expects a single object to exist at the given address, and thus calls the destructor on that object alone. The delete[] operator, on the other hand, expects an array of objects to exist starting from the given address, has some kind of mechanism to figure out how many there are (probably something like asking the heap for the size of that memory block and dividing it by the size of the class (or type)), and then calls the destructor on each object individually. Only after the calls to the destructors does the memory get deallocated ("deleted"). Most implementations would probably implement the memory deallocation in exactly the same way for both the delete and delete[] operators, but again, that's not a guarantee, …