mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Really? You had the same problem as the OP? You mean the problem that is triggered when someone sends you a message on your shoutbox.... And your shoutbox is currently empty, because nobody ever sent you a message... I guess being a troll is a lonely affair.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I'm also really excited by CERN's discovery of the Force that binds the galaxy together.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

You might also want to check out this page; it seems to have a pretty comprehensive description of many different graph generation algorithms, depending on the desired probability distributions.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I must confess I don't understand the talk of "vertices at random" or the point of "random-shuffling" them.

Say you have the following vertices:

1 2 3 4 5

Then, let's say you generate a "random edge" (by picking two vertices at random) and it turns out to be 5 -> 3.

My point with random shuffling is that if you do a random shuffle (such as using the std::random_shuffle algorithm in C++, and I'm sure any other respectable language has an equivalent in its standard library), you might get something like this:

5 3 1 4 2

And in this case, my method of just connecting consecutive vertices (in the shuffled order) gives you, as a first edge, the edge 5 -> 3. Is this less random? No. Is this more efficient? Yes, because you do the shuffle once, and then, you can get N-1 random edges that are just as random as those you would generate from the first method. The point is that all the necessary "randomness" is created by the shuffle and the rest can be done deterministically (and thus, with a guaranteed completeness (and efficiency)).

Of course, like I said before, that particular way of creating edges will create one long chain of vertices (a single-branch tree), which I understand is not what you want. But the main point here is that this idea of doing the random shuffling (or similar) as the initial randomizing …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

You didn't understand my algorithm at all. I'm working under the assumption that you are generating the vertices at random. If you are starting with a set of vertices that you cannot choose, then the problem is different and my algorithm does not apply directly (but you can fix that by creating a random-shuffle index map to create a randomly shuffled "view" of the vertices you have, which adds O(N) storage of integers, which is not too bad).

Here is the algorithm I was describing (in the style of the C++ Boost.Graph library, which is the only serious graph-theory library in any language, IMHO):

template <typename Graph, 
          typename RandEngine,
          typename GenVProp, 
          typename GenEProp>
void generate_connected_graph(Graph& g,
                              RandEngine& rng,
                              GenVProp gen_vprop, 
                              GenEProp gen_eprop, 
                              std::size_t N,
                              std::size_t M) {
  typedef typename boost::graph_traits<Graph>::vertex_descriptor VDesc;
  typedef typename boost::graph_traits<Graph>::vertex_iterator VIter;
  typedef typename boost::graph_traits<Graph>::edge_descriptor EDesc;

  assert(M >= N - 1);  // need at least N-1 edges to connect N vertices

  // Generate N random unconnected vertices:
  for(std::size_t i = 0; i < N; ++i)
    add_vertex(gen_vprop(), g);

  // Generate N-1 edges to connect all vertices into a chain:
  VIter vi, vi_end;
  boost::tie(vi, vi_end) = vertices(g);
  while(vi + 1 < vi_end) {
    add_edge(*vi, *(vi+1), gen_eprop(), g);
    ++vi;
  }

  // Generate the remaining M-N+1 random edges (for M edges in total).
  // Note: this assumes integer vertex descriptors (e.g., vecS storage):
  for(std::size_t i = 0; i < M-N+1; ++i) {
    VDesc u = rng() % N;
    VDesc v = rng() % N;
    while( u == v )
      v = rng() % N;
    add_edge(u, v, gen_eprop(), g);
  }
}

This is O(N+M). And if you have objections about the fact that the first edge-generating loop is not "random" …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

The same for me, I see the red and blue pills... not gray. I think the effect is stronger when you take your focus off the images (so they get blurry).

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

The graph is being generated by selecting edges to add randomly and as a result many of the edges generated do not connect two components and thus must be discarded.

I guess the problem lies in your conception of "randomly". This is an extremely vague term. I assume you mean that you would generate an edge by picking a source vertex and a target vertex, each through uniform sampling of all vertices of the graph, and then reject it if it doesn't connect two components. This is pretty much the worst way of solving this problem, and I don't understand why you would do that. If that's not what you mean, then you must clarify further.

How many edges are expected to be generated, including discarded edges, before only one component remains?

I don't really want to answer this question, because it smells like a homework question. When you want to figure out the complexity or expected run-time cost of the worst possible strategy, it's usually a homework question, because who else would care about knowing this?

Also, there are a number of additional things that are needed to really be able to answer this question, especially the distribution of vertices between components (which I guess you could assume to be N/d on average, but that's kind of meaningless).

And by the way, the worst case is infinity, because it's a probabilistic algorithm, in case you didn't know.

Also, is there a …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

These are hexadecimal values, not characters. If you try to read them as characters, you will end up with the character array containing something like this:

c[16] = {'4', 'C', '6', '1', ' ', '6', ..., 'C'}

You need to read them as hexadecimal numbers. Here is how you do that:

unsigned int temp;
aIStream >> std::hex >> temp;
// Now, temp contains the two-byte value (e.g., 0x4C61):
c[i]   = (temp >> 8) & 0xFF;  // first byte, as it appears in the text
c[i+1] = temp & 0xFF;         // second byte

That's basically how it's done.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Is there a result that states how many edges I need to generate before the graph becomes connected?

Well... one hint to this might be found in the definition of a spanning tree. The only question remaining is, how many edges does a tree with N vertices have? (this is a pretty elementary question, but if you don't know it, just grab a pen and paper and draw some trees, and count the vertices vs. edges in them, you should quickly be able to figure out the pattern)

Is there another approach that guarentees termination in a reasonable time (O(nlog n) or better)?

If you have no particular restrictions (e.g., like about which vertex to connect to which), then you can do this is linear time, with no extra storage. Here is a hint: what if all the vertices formed one long single branch... how would you create those edges?

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I think there is at least some hope. For example, there are some very serious people who have worked for years to come up with a solution that allows you to cope with the end-of-life of Win Server 2003 without any security risks; they have put up clear and detailed instructions for setting up their ingenious solution.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I believe that the problem is that your lines 35-36:

sumxto2 = pow(sumx2, 2);
sumyto2 = pow(sumy2, 2);

should be as follows:

sumxto2 = pow(sumx, 2);
sumyto2 = pow(sumy, 2);

Note the use of sumx and sumy instead of sumx2 and sumy2.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

@diafol

Christy Moore

Very nice. It makes me think of Cornelis Vreeswijk, a traditional (and funny) Swedish folk singer; you might want to check him out.

Tony Joe White

Awesome! Thanks for that shout out, I hadn't discovered any dope shit in the blues genre for a while. The last time was probably a few years ago with Seasick Steve's hick-blues, check it out if you haven't yet.

I also listen to a whole bunch of different styles, pretty much all over the spectrum.

In the spirit of being on an IT forum, when coding, here is my playlist filter I tend to use nowadays:

"Dance" OR "Funk" OR "Electronic" OR "Rap" OR "Hip Hop" OR "Soul" OR "R&B"

I find those tempos and beats to be most conducive to intense coding.

Here are some of my reliable favorites from various genres:

Grimes (atmospheric-electronic)
Brother Ali (funky rap / hip hop)
Scandinavian Music Group (atmospheric-folk)
Carolina Chocolate Drops (old-time folk, "genuine negro jig")
Kandle (folk-rock / pop-blues)
Eivor Palsdottir (cannot be described with the words of mere mortals)
Meiko Kaji (traditional Japanese)
Everlast (rap-blues)
Notorious B.I.G. (rap god.. for those living under a rock)
Janelle Monae (space R&B)
Daniel Bélanger (french lyrical soft/pop)
Radio Radio (unhinged hip hop)
CeeLo Green / Gnarls Barkley

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I would guess that the errors you got prior to this one prevented the "podofo.lib" library from getting built correctly, which is why it wasn't found later on when it was needed (by the "CreationTest" program).

When it comes to dealing with compilation or linking errors, the rule is to always start from the first error that comes up. The errors that appear last are usually caused by the ones that came before. So, always start from the top, and don't worry about the later errors until you've fixed all the earlier ones.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I commend you for your enthusiasm with this project of yours. But Schol-R-LEA is right: you are getting dangerously close to spamming with these posts. I deleted the other duplicate posts you made.

About your project.... I'm afraid you have a lot to learn. It pains me to say that because you clearly put a lot of effort into it, but I just can't let you go on believing in your own illusions.

Nearly everything you said in your post is wrong.

For the c++ program, map is used everywhere.

That's wrong. The std::map class is one of the least used C++ containers in production code.

And bottleneck of program performance is often the performance of map.

Nobody would use std::map for heavy-lifting or high-performance code.

Especially in the case of large data

For large amounts of data, people use sophisticated database engines that are highly tuned for performance. The std::map class is a simple container for simple, small, non-demanding applications; every professional knows that.

unable to realize the data distribution and parallel processing condition

To enable parallel processing, the key is to segment the data into partitions of elements that avoid false sharing on the memory architecture and find suitable ways to resolve data-races (such as fine-grained locks, lock-free schemes, or a hybrid like software transactional memory).

The map of STL library using binary chop, its has the worst performance.

That's not true. The typical std::map implementation …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I would also add that, as some have pointed out, it might be useful to be able to see a vector as a matrix and vice versa, for certain operations. If this is needed, the preferred approach is to use adaptors, which have the advantage of being more explicit about the decision to treat a vector as a matrix and such, preventing certain pitfalls that arise from implicit conversions that may not be what the user wants or expects. Common adaptors are "matrix view of a vector" (treat a vector as an Nx1 or 1xN matrix, or an NxN diagonal matrix), "matrix view of a matrix" (treat a part of a larger matrix as a smaller matrix), and "matrix slice" (treat a row or column of a matrix as a vector). You can see those types of adaptors in most linear algebra libraries (like all those I linked to, including mine).

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

On lines 49 and 67, you probably just forgot the std:: when you used string. And the "C++ forbids declaration of line with no type" is probably due to the error above it (that it doesn't recognize 'string' as a type, and therefore, 'line' has no type).

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I think that it makes some sense to consider a Vector as a subclass of (derived from, a special kind of) Matrix. For example, Matlab creates vectors as matrices (as Nx1), and treats any matrix with one dimension equal to 1 as a vector, i.e., it doesn't even "subclass" them, it simply creates an Nx1 matrix when you create a vector. This is an even stronger approach: it says not only that a vector is a kind of matrix, but that a vector simply is a matrix.

But this "vectors are matrices" approach is convenient but it also has its drawbacks. Mathematically, vectors have a lot more useful properties than matrices, and these become awkward to express when you use this approach. Just look at some of the more awkward Matlab functions that try to deal with this problem, e.g., particularly bad examples is the norm or max functions, where you have a single function that has different behaviours and parameters depending on the type (vector or matrix) of the "matrix" involved. This kind of stuff creates unnecessarily complicated interfaces because all this stuff has to be (1) documented in details and (2) checked at run-time, and you need to figure out if there are any reasonable fallbacks or merges to be done (e.g., is the Frobenius norm of a Nx1 matrix the same as the Euclidean norm?).

I would not recommend doing what Matlab has done, as I consider it one of its design …

ddanbe commented: You make me happy! +15
JamesCherrill commented: Outstanding contribution. +15
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

The C standard headers time.h and stdlib.h should not be used in C++ code. You need to use the C++ versions of these headers, which are included with #include <ctime> and #include <cstdlib>, respectively. Then, any function from these headers must also be prefixed with std::, like all other C++ standard library classes and functions.

Then, the _strdate and _strtime functions are not standard C/C++ functions. They are C functions provided by old Microsoft headers (included by time.h), and you should not use them if you want to write portable code (e.g., be able to use another compiler or OS beside Microsoft or Borland). The standard C++ equivalent of that code is this for example (using the strftime function):

char buf[100];
/* Print today's date and time, e.g. "Thu Aug 23 14:55:02 2001". */
std::time_t mytime = std::time(NULL);
std::strftime(buf, 100, "%c", std::localtime(&mytime));
return std::string(buf);

If you look at the documentation for strftime that I linked to, you will find further options for formatting the date / time printout.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

The OpenGL packages might have different names on your Linux distribution (I gave the names for Ubuntu). You should use your software center (or equivalent) and search for "OpenGL" and install the "dev" packages that seem to be the main OpenGL (gl, glew, glu) library packages.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

do take note, that's the trend of what's most popular, not what's most powerful.

Also, take note that those figures of "popularity index" are actually a measure of the google searches for tutorials on the language. All this is saying is that 1 in 4 people who are currently taking their very first steps on the road to learning a programming language are learning Java (because it's only in the first steps that you spend significant time looking up and reading tutorials). And after learning your first language, the others are much quicker to learn, i.e., you probably will spend 10 times more time going through tutorials in your first language than you will in any other subsequent one you learn.

The reason why Java, PHP, Python and C# are the highest on that list is simply because those are the most popular languages for teaching computer science. In other words, it's a measure of which languages computer science teachers prefer to teach in, nothing more. Considering that it takes about 10 years of experience to become an expert programmer, I wonder how many experts are still spending a significant amount of their time reading Java tutorials... (rhetorical) ...they should be writing them at this point.

Also, remember that computer science teachers are not interested in teaching computer programming. They teach theoretical algorithms and data structures, and they prefer to give exercises in languages that are easiest to use without any real skill, like Java / C# / …

Tcll commented: well said +4
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

How shocking... (roll eyes)

I remember how Microsoft sold secureboot to the world by promising that it would always be possible to switch it off, and thus, not locking owners out of the hardware they paid for. How long did that promise last? For a few years and for the duration of one version of Windows (8-8.1), by my reckoning. I guess that's the extent of the respect Microsoft has for its customers.

Btw, in my opinion, secureboot is one of those over-the-top security features that 99.9% of people don't need, but a great deal of people suffer for it. There are real attacks that secureboot protects against, but those are really hardcore attacks that only the most sensitive organizations have a legitimate reason to fear and protect themselves against. When the average Windows user often has single-user login with no password, no hard-drive encryption, a very lax firewall (if any), a cheap anti-virus suite, and a very careless attitude towards security (visiting dubious websites, installing anything, hitting "agree" to anything that pops up, etc..), why on Earth does that person need SecureBoot? His computer has about a million wide-open vulnerabilities that will be much easier to exploit than the security holes that SecureBoot tries to seal.

I think that, from the very beginning, SecureBoot was all about Microsoft seeing the gloomy prospect of many more people and organizations move to alternative OSes (which is something that can happen very quickly, as soon as it reaches a critical mass), and …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

The following site has a pretty comprehensive list of supported printers / scanners for Linux. I would say that as you buy your printer, you can just look up the model on that site before to make sure it's supported.

Overall, I would say that the worst that could happen is that some of the advanced features of the printer / scanner might not be supported. But it's very rare that it doesn't work at all, especially if it's a network printer / scanner.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Like it was said by others, the solution is to add an include of <cstdlib> where the rand / srand and system functions are found.

I would just add that the reason this might have worked on another / older compiler is that standard library headers sometimes include other ones. It is very much possible that in a previous version of GCC, the <ctime> or <iostream> header included the <cstdlib> header. Strictly speaking, none of the standard headers is required to include any other standard header, and so any code that implicitly assumes one standard header will include another is potentially invalid on another compiler / platform.

In the old days, there used to be a lot more inter-inclusion of standard headers (especially because early implementations of C++ standard libraries were implemented in terms of the C standard libraries). Over the years, the standard library implementations have gotten more "lean", after people complained repeatedly about standard headers using this "including-the-entire-world" style (which creates code pollution and long compilation times, which people were very frustrated about).

If you were used to writing C++ code with an older compiler, then you might be accustomed to this older behaviour, and it will take some time to update your coding habits.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

If you read the documentation for the istream operator for character arrays, you will learn that you first need to call the "width()" function on the input stream to tell it how many characters the read operation can accommodate. Otherwise, the width will be zero, meaning no limit, and the stream will happily read more characters than your array can hold, overflowing the buffer.

So, instead of this:

 fin >> comets;
 fin >> groupers;

It should be this:

 fin.width(6);
 fin >> comets;
 fin.width(6);
 fin >> groupers;

Note that the width() function has to be called again for every operation like this.

For a simpler approach that doesn't require these tedious calls to "width()", just use the C++ string class instead.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

:-1: error: cannot find -lGL

This error means that the linker cannot find the OpenGL development libraries on your computer. You should install the libGL packages (under Ubuntu, they are libgl1-mesa-dev, libglu1-mesa-dev and libglew-dev).

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Well, like I said earlier in this thread, I consider that part of the definition of "powerful" is the width of the application domain.

I don't think that my latest rant contradicts the concept of using domain-specific languages or tools. For me, a good domain-specific language is one that fulfills those two conditions, that is, is convenient to use and can get you pretty close to an optimal solution, for problems in its target domain.

In my latest rant, I was talking more about general-purpose languages, or at least, languages that would like to claim to be general-purpose (like Java and C#, and to a lesser extent, Python).

Admittedly, my bias is towards arguing that C++ is the most powerful general-purpose programming language today. If you want to say that other so-called general-purpose languages are actually domain-specific languages, or if you would agree with my last argument that as general-purpose languages they are inadequate, then either way I'm happy.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Also, anything I can think up, I can usually do [in .NET].

Then I would say that your imagination or worldly experience in programming is seriously limited. Off the bat, it's important to realize that on a purely abstract level, all Turing complete languages can do anything. But on a practical level, there are two additional concerns: (1) how easily can I create a working solution, and (2) how good of a solution can I create. For me, a powerful language is one that scores very high on both fronts (like C++), and they are not mutually exclusive (contrary to some popular beliefs, especially within Java/C#/Python echo chambers).

High-level managed or interpreted languages aim to score high on concern (1) through a simplified programming paradigm (e.g., pure OOP), lots of run-time instrumentation of code and data (e.g., universal base-class, run-time reflection, duck typing, etc.), many layers of indirections, and very conservative (and safe) memory and threading models. The result of that is an insurmountable upper-limit on concern (2) (or "expressive power"), which actually makes some major disciplines of software engineering entirely pointless in those languages, not because they are not needed, but because they are not doable, due to techniques used that cannot be expressed in those languages. This typically explains why some people's imagination doesn't extend very far, because as a Java or .NET developer, these fields of software engineering would simply be outside your observable universe.

I cannot count the number of times I have encountered …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

is there possbility Qt will somehow release the password without receiving proper trigger?

No. If you do your job correctly, passwords should never appear as plain text anywhere except at the point where the user enters it. Basically, you have an edit box where the password is entered. That edit box should take the password, immediately get a hash from it and then scramble the memory of the edit box. Then, to validate the password, you only check that the salted hash code matches the salted hash you have stored somewhere.

When it comes to hacking desktop applications, the techniques used mostly involve different forms of code injection. In other words, it's not so much about getting a password revealed, but rather about getting some piece of code executed with a high enough privilege level (e.g., root, admin, etc.). There are basically three forms of that.

First, you can spoof a dynamic library, such as with DLL injection or shims, to which Qt is somewhat vulnerable too, like any other widely used library. The way to protect oneself from this is to control the distribution of the library, something that Linux and Mac do very well through secure and verified software repositories, while Windows has always struggled enormously with this problem.

Second, you can overwrite the code of a loaded program to do things like replace it with your own code, remove it (like removing a password / license check), or pretty much anything you …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

What would the ivalArray() indicate?

It indicates that when a Banana object is created, the ivalArray data member will be value-initialized. This is standard terminology and it boils down to meaning that it gets initialized to a default value. For all primitive types (int, double, any pointer type, etc..), the default value is zero. And this applies to arrays too, i.e., value-initializing an array will value-initialize all of its elements.

One important thing to know is that if you don't put this statement, ivalArray(), in the constructor's initialization list (which is the technical term for that place between the : and the {), then the array will be left uninitialized. Compare:

Banana::Banana() : 
    ival(0), ivalArray()
{
  // at this point, ivalArray contains all 0
}

// But...

Banana::Banana() : 
    ival(0)
{
  // at this point, ivalArray contains GARBAGE!
}

Would it be the same as initializing every element to null/empty?

Yes. For an array of primitive types (or, in standard terminology, trivial types), this will initialize all elements of the array to zero / null. In case you have an array of objects of some non-trivial class type, this will call the default constructor on all elements of the array, i.e., they are all default-constructed.

is there a difference between initializing after a colon and before the {} and initializing inside the {}.

Yes, there is (ignoring the possibility that the compiler will optimize things out). Basically, just …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

The tolower and toupper functions only work on a single character at a time. So, if you want to apply it to a whole string, then you need to iterate through the characters in the string and apply that function to each character.

For example, if you have a string, you can do this:

#include <cctype>   // for tolower
#include <cstring>  // for strlen

char a[] = "Something";
int a_n = strlen(a);
for(int i = 0; i < a_n; ++i)
  a[i] = char(tolower(a[i]));

This will turn each character of the a string into a lower-case one. You should be able to figure out how to apply that to your particular problem.

Generally-speaking though, it is easier to use the C++ class std::string instead of these old C-style strings (char*).

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I am very hopeful with the trend of much "professional" commercial software being ported to Linux and Mac. And I am also very happy that my field, robotics, has really embraced Linux. As I am currently evaluating my options for employment, I see that the vast majority of people doing robotics are using a Linux-based stack of software and libraries. The traditional engineering side of the business is still inclined to use Windows, mostly due to the professional engineering software that they work with, but I'm very hopeful for the future of Linux in those technical fields. For example, stepping into a space mission operations control center and seeing that all computers are running Linux makes me very happy.

What I know I miss in the Linux world is having that grounding in command line tools, and editors like VI and Emacs, still learning Bash scripting.

I don't think that's a major problem. These things can be learned as you go. I've never set out to learn Bash and other fundamental Linux tools. I just learned to become proficient with them over time, from one simple task to another, learning bit by bit. For instance, after many years of using Linux, I only learned a few months ago how awesome "named pipes" are when running lots of scripts... I mean, being able to chain multiple programs (that were never meant to be chained) to run in parallel while reading / writing each other's inputs and outputs without generating …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I think the real question here is: Why are you connecting to the internet with a Win XP computer? Do you like playing with fire?

XP is no longer supported by Microsoft and it no longer gets any security updates or patches. I would not recommend that you continue to use it to connect to the internet. For an older computer, you could replace it with a lightweight Linux distribution, like Lubuntu, which will be new, fast and secure, and free!

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

No. I would not choose to use a non-unix-like system. I even consider it when looking for a job: if they seem to be using Windows, I have to take into account that the job would imply a heavy daily dose of OS-related frustrations that I wouldn't have to suffer at a different job where they use a unix-like system.

The main reason that there is an operating system, Windows, that isn't based on posix is nothing more than a historical and rather unfortunate accident. One might argue that it is a part of Microsoft's business strategy now (prevent Windows-only commercial software from being easily offered on unix-like systems (Mac, Linux, etc.)). I think that when Microsoft started out with DOS, it wasn't yet clear that Unix-like systems would end up becoming the de facto standard system architecture for nearly all operating systems from the late 80s onwards. This explains why it was reasonable in those days to use a completely original architecture for DOS instead of trying to follow the lead of unix / posix architectures. And when Microsoft captured such a large part of the consumer market, it became an advantage to be "special" in this way. And now, the only way to make Windows usable is to install Cygwin (a Unix-like compatibility layer running on top of Windows).

Posix is undoubtedly the best architecture. Also, Steve Jobs' decision to adopt a unix-like / posix operating system (OS X, based on NeXT) was definitely …

JasonHippy commented: Right on the money, as ever! +9
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Qt is free. It's true that they have some professional tools that you have to pay for, but you don't really need those tools to work with Qt (in fact, I wouldn't recommend them). Technically, all you need is to install the libraries and have some C++ compiler of your choice. To create the GUIs, it's easiest if you use one of Qt's free tools, like Qt Creator or Qt Designer. Just look at the "Community" download page, which refers to the open-source (and free) side of Qt. Most Linux distributions have packages for the Qt libraries and most of its free development tools, so you should look there first.

Also, Qt is dual-licensed. What this means is that you can use Qt for free as long as you don't modify it (only use it) or redistribute it (those are the terms of the LGPL license). If you want to modify or redistribute it, then you can pay Qt a licensing fee for doing that.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

How long did you try the glasses you had before?

I remember that my first glasses also gave me quite a ride, in terms of headaches and feeling off balance. That got progressively better over the course of a month or so before it went away entirely. I think that these weird effects are due neither to a bad prescription nor to the glasses themselves (as opposed to having contacts or laser surgery); I think it is just a natural consequence of your brain slowly readjusting. Remember, your brain has spent the better part of your lifetime getting used to analysing the blurry images that come from your eyes. This whole system will necessarily be thrown out of whack for at least a little while when, all of a sudden, it gets crisp, clear images.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

@iamthwee
It all depends on what you want to do. If your interests are in the design of the game, its look and feel, its user experience, and gameplay or storyline, then you should definitely be spending most of your time with the tools of those trades (like Maya / Blender, and off-the-shelf game engines).

But if your interests are in the programming of the various core algorithms in games, like 3D graphics, physics, and A.I., then you need to spend your time doing that. And RikTelner's question seemed to specifically ask about doing the 3D graphics himself.

I've gone through this. I got really into doing 3D graphics for quite some time (when I was much younger). And it's true, as you said, I could spend months and months on projects that never really came close to looking like an actual game. But I learned a lot about that domain, about coding, and about myself and what I loved to do. And you ask "why" do this? Because you enjoy it. If you don't enjoy it, don't do it. It's that simple.

One time, I said to myself that I should for once make a complete game. So, I did as you describe. I used off-the-shelf stuff and aimed for a simple game concept (2D shooter game). It was so insanely boring to me, just a lot of very trivial work and nothing that presented an interesting challenge. I know that some people love to design games and …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Windows is programmed in C and C++. Its native APIs (win32) are in C. Most professional Windows software is written in C++ (like Visual Studio, MS Office, Adobe products, etc..). I don't understand what you mean by saying that you need C# to write native applications for Windows. C# is the primary language for writing native .NET applications, i.e., it targets .NET, not Windows. If there is any meaning to the term "native application", .NET applications are anything but.

One thing that is true though is that Windows offers a much more limited practical set of options when it comes to writing applications. In Windows, you can either use C, C++, C# or Java, or one of two dozen dead / failed languages that Microsoft has tried to promote over the years. I guess this has the advantage of being easier to choose, since there is less of a choice. Is that what you mean?

On any platform, writing GUI applications that have a native "feel" to them is mostly a matter of using the right GUI library / toolbox. Most platforms have a GUI library that is most native, and it is often the library used by the OS's own graphic elements too (e.g., its start menus and configuration panels). On Windows, that used to be the Win32 API in the early days, then it became MFC for a long time, and now it's WinForms. In Linux, things are a bit more diverse because it depends mostly on the desktop environment used, …

Slavi commented: Another one of those, #mustreadbymike2k +6
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

As mentioned by Scott Meyers in Effective C++: 55 Specific Ways to Improve Your Programs and Designs: "we should use private inheritance only when we want to implement one class in terms of another class, there is no direct relationship between the two classes, and we still want to override public/protected virtual functions in our class".

I was intrigued by this quote and read that section of Scott Meyers' book. I think you might be taking that "advice" a bit further than it is meant to be taken. And I also have a couple of reservations about it.

First off, the two cases that he describes in that section (which are what I would call "intrusive composition" and EBCO (empty base-class optimization)) are essentially the "advanced" techniques I was referring to in my earlier post (when I said "[private inheritance] should only very rarely be used, and those uses are kind of "advanced" techniques"). So, just understand that his arguments are true in themselves, but they have very small application areas (they are very rarely used). In fact, in modern C++, the EBCO technique, which he describes as an "edge case", is actually far more common today than the other technique he describes. And if you are interested, the reason for that shift in modern times (remember, Effective C++ is quite an old book) is that it's a part of a general tendency to move away from traditional monolithic classes towards much smaller classes (and thus, often empty).

Secondly, …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

class pd is privately derived from AbstractClass as both are different classes and there is no relation between them.

If they are completely unrelated classes, why do you use any kind of inheritance? You should not inherit if there is no purpose to it. In fact, private/protected inheritance is a very peculiar feature that should only very rarely be used, and those uses are kind of "advanced" techniques. Such inheritance is almost never justified in typical C++ code.

And in your case, there seems to be absolutely no purpose for the private inheritance from AbstractClass by the pd class.

COuld you suggest if without need of servicecall function same can be achieved.

No, because that would defeat the point of how the AbstractClass is implemented. The whole point here is that every call to do_servicecall() must go through the non-virtual public member function servicecall() from the AbstractClass. This is a good thing in general because it allows you to encapsulate any implementation details with respect to the relationship between AbstractClass and its derived classes (e.g., shared class invariants, virtual functions, etc.) without affecting the users. The pd class, despite the misleading use of inheritance with AbstractClass, is a class that has no meaningful relationship with AbstractClass nor with any of its derived classes, and therefore, the fact that it is restricted to using AbstractClass' public interface is a good thing.

do you think that it is perfect design to implement the same.

I think …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

We won't do your homework for you. If you are trying to solve one of these problems and you are encountering problems doing so, then you should post what you have and tell us what specific things you are having trouble with.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

If you are planning to do some networking work with C++, I recommend that you look at Boost.Asio instead of using a crude platform-specific interface like winsock. Despite a lot of its annoying aspects, the Boost.Asio library remains the best way to do this kind of thing. I especially recommend setting things up as a single-threaded asynchronous server, that's the "optimal" way to do things with Boost.Asio, and it minimizes multi-threading issues (which are some of the worst headache-inducing problems with this kind of work).

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Like Slavi said, this address range is reserved for local network addresses (e.g., for everything plugged to your router).

The usual reason for typing such an address in a web-browser is to access your router's configuration page (a small web-page server that your router runs to provide you with a GUI to configure it). Different routers use different addresses by default; it mostly depends on the company that made the router. Usually, they use an address in the low ranges. Common ones are 192.168.0.1, 192.168.0.2, 192.168.1.1, 192.168.1.2, and so on. If you don't want to go by trial and error (which would not take long), you can look up the default address for your brand of router, or do as Slavi suggests: find your default gateway address (which is usually the router's address) through ipconfig / ifconfig or some "properties" panel of your connection (e.g., if you are using Windows).

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

The CMakeLists.txt file is where you create the build script for your project. Unless you are doing something fancy, this file should be the only thing you need for setting up a build with CMake. You can look for cmake tutorials for information on how to write that.
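To give you an idea of the scale involved, a minimal CMakeLists.txt can be as short as this (the project and file names here are just placeholders):

```cmake
cmake_minimum_required(VERSION 2.8)
project(MyProject)

# Build an executable called "my_app" from the listed source file(s):
add_executable(my_app main.cpp)
```

From there, you add commands for include directories, libraries to link, install rules, and so on, as the tutorials describe.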

The "Config" file request is too vague to really know what it means; "configuration file" is a very general term. You need to figure out what that file is for. Just ask whoever told you to create a Config file to tell you what it is for. One possibility is that it refers to a configure script for the "autoconf" build tool (in my opinion, just use CMake and forget about autoconf; it's an antiquated system that is very limited and very annoying to use).

For the "spec" file, I'm not exactly sure what that is. I have seen files with the .spec extension before in projects, and the contents of it seem to be related to RPM packaging (it contains package name / author / etc., description, build command and install command). You can see these instructions. I don't really have too much experience in packaging, so I can't really help you.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

If you want to do 3D graphics on your own (without pre-made higher-level libraries), then your starting point should be to learn to use C++ and OpenGL. The NeHe gamedev tutorials are a classic starting point for novices; they are a bit outdated now, but still very good for learning the basics. Just look for the "Legacy Tutorials", like Lessons 1-5. The NeHe tutorials are renowned for being well written and easy to follow. Here you can also find more up-to-date tutorials (OpenGL 3.3) that are a bit more advanced (but still start from the basics).

When writing 3D graphics code, C++ is inevitable and totally dominant. It is only at the higher-level (game logic) that languages like C#, Java, and Python start to appear or become relevant. Custom scripting languages or interpreted languages like Python tend to appear more in game logic code. And managed languages like C# and Java tend to appear more in in-house support tools development (e.g., GUI applications to help game designers make game levels, models, etc..). But nearly all of the code that really matters in actual 3D games is written in C/C++.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I don't think anyone really does 3D graphics in C#, or at least, I don't think anyone should. As far as I know, Unity is written in C and C++; it's only the high-level bindings that are in C#.

Since C# is (in practice) a Microsoft-only language, I would imagine that the only way to write 3D graphics in C# would be by using Direct3D. Here are some tutorials on that. But again, I think it's a waste of time, i.e., coding at that "low" level with a very "high" level language like C# is asinine.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Every bug you find is the last one.

I've been finding my last bug every week for the past 10 years... yep, that's how good I am. I found my last bug ever last week, now my code is flawless. ;)

I would have to say engineer. I never knew one who didn't write code like that.

Hey, I'm an engineer. Are you saying you don't know me RJ? Or are you saying my code is shit?

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

So, from the link you gave, there are three files in there, and they are short enough that I'll simply post them here:

CMakeLists.txt:

cmake_minimum_required (VERSION 2.6)
project (Tutorial)

# The version number.
set (Tutorial_VERSION_MAJOR 1)
set (Tutorial_VERSION_MINOR 0)

# configure a header file to pass some of the CMake settings
# to the source code
configure_file (
  "${PROJECT_SOURCE_DIR}/TutorialConfig.h.in"
  "${PROJECT_BINARY_DIR}/TutorialConfig.h"
  )

# add the binary tree to the search path for include files
# so that we will find TutorialConfig.h
include_directories("${PROJECT_BINARY_DIR}")

# add the executable
add_executable(Tutorial tutorial.cxx)

TutorialConfig.h.in:

// the configured options and settings for Tutorial
#define Tutorial_VERSION_MAJOR @Tutorial_VERSION_MAJOR@
#define Tutorial_VERSION_MINOR @Tutorial_VERSION_MINOR@

tutorial.cxx: (should be tutorial.c, because this is C, not C++)

// A simple program that computes the square root of a number
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include "TutorialConfig.h"

int main (int argc, char *argv[])
{
  if (argc < 2)
    {
    fprintf(stdout,"%s Version %d.%d\n",
            argv[0],
            Tutorial_VERSION_MAJOR,
            Tutorial_VERSION_MINOR);
    fprintf(stdout,"Usage: %s number\n",argv[0]);
    return 1;
    }
  double inputValue = atof(argv[1]);
  double outputValue = sqrt(inputValue);
  fprintf(stdout,"The square root of %g is %g\n",
          inputValue, outputValue);
  return 0;
}

The most important line that needs explanation is the following line from the CMakeLists.txt file:

configure_file (
  "${PROJECT_SOURCE_DIR}/TutorialConfig.h.in"
  "${PROJECT_BINARY_DIR}/TutorialConfig.h"
  )

The configure_file command in CMake will take some input file (in this case, "TutorialConfig.h.in") and generate an output file (in this case, "TutorialConfig.h") which will be the same as the input file but every occurrence of @Something@ (where "Something" can be anything) will be replaced by …
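For example, with the version numbers set earlier in the CMakeLists.txt (1 and 0), the generated TutorialConfig.h would come out like this:

```c
// the configured options and settings for Tutorial
#define Tutorial_VERSION_MAJOR 1
#define Tutorial_VERSION_MINOR 0
```

This is how the version numbers defined in the build script end up available to the C code at compile time.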

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Unfortunately, like in the real world, we cannot censor or ban people just because they have a thick skull. ;)

But if a user has a habit of hijacking other people's threads to ask his own repetitive questions, then that would be something we could warn or even infract about. That would fall under the "keep it organized" rule.

We can't really do anything about someone who just doesn't get it, or makes no effort to understand or learn. The way I deal with this is that if I feel that I've given a sufficient answer and the OP is being thick or ungrateful, I just stop responding.

cereal commented: +1 +0
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Writing modern C++ parallel code is very cutting edge, and by that, I mean very recent / experimental. I think that the main existing modern C++ libraries that enable parallelism in a way comparable to what SYCL hopes to do is Microsoft's C++ AMP and Intel's TBB. Things like Cilk and OpenMP are the next contenders, but are definitely more "C-style", just like OpenCL and CUDA.

The problem is that you seem to be looking for some example of "established modern parallel C++ practices", which doesn't really exist yet. There are modern C++ practices, and there are some established parallel C-style practices, but I don't think there is much that combines both. And if there is, I'm not aware of it (and to be honest, I'm not the right person to ask about it either; my acquaintance with this field is very limited).

I think your best bet is to look for code that uses either C++ AMP or TBB. I think that those programs should map fairly well to SYCL (from what I understand SYCL to be). There were many interesting talks on parallelism at CppCon 2014; you might also want to look at those.

Just some thoughts about what you mentioned:

I'm particularly interested in code which uses:
lambda functions and variadic templates,

Lambdas and variadic templates are nice modern C++ "front-end" features that I could very well see being used well in parallel C++. I would imagine that lambdas are a particularly nice way to …