mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

There's no tail recursion in that function. For tail recursion to apply, you generally need to have nothing appearing after the recursive call (the call to "back()"). In this case, a character is printed out after the recursive call, which means that the character value must be retained through the recursive call, which breaks tail recursion (or makes tail-call optimization impossible). If someone claimed that this code uses tail recursion, then that person is wrong.

The easiest way to tell if something can be tail-call optimized is to see if you can manually translate the function into an iterative one (instead of recursive) and see if you can do it without auxiliary memory. In this case, there is no way to do that iteratively without having O(N) memory. For example, one way to do this iteratively is to store all the characters into an array and then traverse that array in reverse to print out the characters, which implies that you need an array of N characters to make this work. The recursive form you have there does the same thing, except that the characters are stored in function-call stack-frames instead of in a simple array, which is just inefficient and bad style (and unsafe, by the way).

mike_2000_17


Why on earth would the pointer symbol * be pushed back and be separated from its pointer name with the increment operator in the middle?

The * is not part of the pointer name, it's part of the type when you declare the pointer, like int* p; which means that p is a pointer to an int. But when you later do *p, the * symbol is a dereference operator, which means that *p fetches the value stored at the address pointed to by p.

So, to understand the expression *++p, you have to read it inside out. The ++p increments the pointer p (that is, it moves its address one increment forward; the number of bytes that this increment represents depends on the size of the thing that the pointer points to, e.g., if p is an int*, then the increment is of sizeof(int)) and then it returns the final value of p after the increment. In other words, the expression returns the address that is one increment after the original address that p pointed to. And finally, the * dereferences that address, as if you did *(++p) or p += 1; *p.

I've recently read that C++ 11 makes dealing with pointers a lot easier, if that's the case does the current version of g++ support that aspect of the new C++ 11 standard?

Well, you are always going to have to understand pointers. But C++11 provides library components that help you manage the …

mike_2000_17

I've never looked into this language before you mentioned it. It has some interesting elements. I'm glad you got me to look into it.

First, I love the built-in concurrency support in Go. It has Ken Thompson's fingerprints all over it. It's basically structured like Unix pipes, which are awesome. This whole way of doing concurrency is something that I have grown to love, and I wish there was more support for it in C++; the C++11/14 mechanisms like future / promise and async go a long way, but are not quite there yet. There is more juicy stuff to come in C++17, so I've heard.

Second, the ditching of inheritance in favor of "interfaces" (as they're called in Go) is another cool decision. I use this pattern all the time in C++, and I much prefer it to the traditional OOP patterns (inheritance); I even wrote a tutorial about one form of it. It's part of a trend, mostly born out of modern C++ practices, of realizing that dynamic polymorphism is great but inheritance is just the wrong way to do it. Inheritance is still useful for boilerplate code in generic programming (and a few other techniques), but since Go doesn't support generic programming, I guess it doesn't matter. But I love the way Go makes the creation of such "interfaces" so seamless and clean. C++ is about to get a feature called "concepts" that I believe could be used for a similar purpose; I'm seriously considering …

mike_2000_17

This video is also a great explanation of this issue, mainly in the form of a "how not to do it, and why", followed by the proper solution used today, which is hashing + salting.

The basic idea is really that you don't need to store a password in plain text, or be able to retrieve its plain text representation, because all you need is to be able to validate the password given (when logging in). So, you store the password in some non-reversible form (a salted hash, which, strictly speaking, isn't "encryption" since it can never be decrypted) and you just compare against that stored form to validate a login attempt.

For instance, this is the reason why when you've lost your password (can't remember it), you cannot get that password back, all you can do is get a new password generated for you or some temporary link to reset the password. There are still places that store passwords in plain text (or in a way that the plain text passwords can be retrieved), but they shouldn't do it, and if you realize that any important site or service uses that method, you should avoid having an account with them, unless that account is "harmless" (e.g., like a mailing-list subscription, or something like that, which doesn't store any sensitive information). And obviously, if you have to be subscribed to a service that stores plain text user passwords, then make sure you don't use the same password(s) as for your more sensitive accounts (email, paypal, computer login, etc..).

mike_2000_17

Analog computers already exist, and they are pretty great. AFAIK, they have fallen out of favor because modern digital computers have become so powerful. In the older days though, analog computers were a great way to perform complex calculations. They were used a lot for things like control systems for automation equipment and stuff like that. One of the big advantages they have is that not only can you compute complex equations with real numbers (actually, complex numbers too) directly, without having to rely on a digital representation of them, but they can also simulate dynamic systems in real time. For example, if you had a mechanical system (like a mechanism or robot), you could simulate its dynamic behavior in real time using an electrical circuit that replicates it (most mechanical things have an electrical analog, with the movement of electric charge playing the role of the movement of objects and the voltage playing the role of the forces on those objects). Also, things like control systems and signal-processing filters are often formulated as dynamic systems that can be realized with an analog electric circuit. So, back in the days when digital computers were just impractical, things like early industrial robots or other similar automated machinery used analog computers (specially designed electric circuits) for the signal processing, control systems and even kinematics and dynamics calculations (for things like model-based control and closed-loop inverse kinematics).

Today, analog signal processing is still a very important first step to any complete …

mike_2000_17

If you look at the practical example section, where they show you how to visualize the garbage collector using that VisualGC tool, you can clearly see that the generational GC is what is being used in practice.

The point really is that the "mark-and-sweep" process is the fundamental mechanism by which garbage collection is done. And mark-and-sweep is still the way that the generational GC works; the generational approach is just an improvement over basic mark-and-sweep that creates a distinction between short-lived and long-lasting objects. It merely takes advantage of the fact that when you are creating and throwing away large quantities of small objects quickly, it is better to do mark-and-sweep passes much more frequently (to avoid letting the memory usage grow too much or be too full of garbage before collecting it). And conversely, when you have long-lasting objects (e.g., singletons, a "main window" object), you can probably just let them sit for a longer time without checking for garbage. So, all that the generational approach does is split the memory into a section of "young" memory that is marked-and-swept very regularly, and a section of "old" memory that is marked-and-swept far less frequently, with some additional logic to promote things from young to old.

Probably some older JVM versions used a vanilla mark-and-sweep with no generational segmentation. But I think that modern versions are more refined than that. But again, generational GC is not an alternative

mike_2000_17

I would echo the same sentiment as others: no one can question the "morality" of choosing (voluntarily) to participate more in one forum or another, or in none.

I also visit SO regularly and post on occasion. But I feel that I wouldn't be able to keep my sanity if I spent too much time there. There are way too many "Wheaton's law infractors" over there (for lack of using a more explicit word that wouldn't pass the bad-language filter). I've been scolded too many times over there for being too "daniwebby" with my answers, i.e., being helpful, opinionated, nuanced and original. I think they very much prefer answers that are short, black-or-white, peddle preconceived notions and play in their echo chamber, and they are really quick with the "copy-google-click-copy-paste-post" automatism that just produces uninteresting junk answers. Some of the highest profile members on SO are really living in a bubble, and I sometimes fear for their mental health. Especially since my interest is C++, some of the guys over there are just fanatical about standard guarantees and stuff like that, and they seem to have lost all notions of real world programming (e.g., I've even had someone prominent tell me that it is undefined behavior to pass an integer to a C function from another C library, which is technically true but a completely ridiculous notion in real world programming, this is the kind of la-la-land stuff you constantly have to put up with on SO).

But if this …

mike_2000_17

Yeah. I remember now, that was the extra step I had to do to claim the cash-out reward, because I was in that situation where my daniweb account had a different (old) email address that wasn't the same as the one used for my paypal account.

mike_2000_17

There is definitely another step involved... I was also confused by this when I cashed out about 6 months ago. The problem is that I don't quite remember what that step was. I think it was about getting into your PayPal account and approving the incoming transaction... or something like that. Dani will probably come around to confirm.

Slavi commented: Thanks, looking forward to see if she replies =) +0
mike_2000_17

I don't really know why you'd want to install Android on a laptop, but I guess it's possible (with android-x86). I would recommend a proper desktop version of Linux instead, like some variant of Ubuntu for example.

Generally, to make room for a Linux installation, you would go into Windows and shrink the partition(s) so as to create 100GB of free space (leave it unpartitioned). Then, during the Linux installation (booting from a LiveCD / LiveUSB drive), you will reach a point (one of the first things to set) where you can specify where to install it. At that point, you can either select to manually specify the partitions ("advanced") or set it to "use the free space" (or something similar), which will partition your free space in some default way that should work just fine.

For example, for Ubuntu, you can follow these detailed instructions, which are written for Windows 8 but are just the same for Windows 7. If you don't want to use btrfs (the copy-on-write file-system mentioned in that tutorial), you should select EXT4 instead of btrfs in the partitioning menu when selecting the format for the / mount-point.

mike_2000_17

I love those benchmarks; they were very well done. I hope someone redoes this work on more up-to-date platforms / compilers / etc... I was a bit surprised by the lack of competitiveness of Fortran (compiled with Intel) compared to C (with gcc). It would have been nice if they had also included benchmarks of C compiled with ICC. The compiler really matters a lot. Take for example Pascal, which ranks really badly: even though the language itself permits pretty much all the same optimizations as C, it is orders of magnitude slower because there hasn't been a good new Pascal compiler in decades.

As for Java, one thing that is apparent is that the memory overhead is huge. I mean, you constantly hear people complain about the memory overhead of C++ compared to C, which is about 1KB. It appears that the memory overhead of Java is anywhere between 25MB and 1GB of RAM. Python is also surprisingly high on that. I guess that the rule for the benchmark's implementations was to write the code purely in the language, not relying on bindings to C/C++ library code. That's where Python shines, i.e., the seamless integration with C/C++ library code, which makes most Python code just a very thin (and easy / convenient) layer of high-level code on top of a C/C++ infrastructure of library code.

mike_2000_17

I share that feeling that Java is on its way down. It is starting to feel more and more like a legacy language. I think that much of it has to do with promises that never materialized.

The promise of "portability through the JVM" is seriously undermined by a couple of factors. First, it was promised that eventually, any computing device would be powerful enough to accommodate it, but small resource-deprived and energy-efficient devices have proliferated, not diminished, with the advent of the "smart-everything" world. Second, the rapid and reckless evolution of the versions of Java has also caused a lot of portability hell. So, the end result is that in the "uber-portable" world of Java, you have to constantly worry about the JVM versions and subsets of your target platforms. By contrast, with native applications (e.g., C++), if libraries that you need are not already installed on a system, you just pull them in, which often can't be done with JVMs.

The promise of "no leaks via garbage collection" is also a failed experiment. Garbage collectors leak memory; that's just an irremediable fact, and diagnosing the root causes of those leaks is nearly impossible. The only practical solution for long-running Java applications (e.g., servers) is to periodically shut down and restart the JVM. And there has been virtually no progress in this domain. But in the meantime, the competition has gotten so much better. On the side of native code (e.g., C++), coding practices, memory debugging tools, and code …

mike_2000_17

I agree with prit, it's pretty vomit-inducing.

mike_2000_17

Wow.. there's been a lot of activity here since I last checked...

@Mike The original song is "Play with fire" by The Rolling Stones

I know, but I like Kandle's version better.

if you read the comments on most of them, the downvote is because of the ignorance.

I've learned that it's better to lead people to discover their own ignorance than trying to mock it or reprimand it.

APott is the one who married D

I'm just gonna say... 50% of marriages end in divorce.. ;)

there's only 1 language that's more powerful than C, GLSL

Really?? GLSL is only for vertex and fragment shaders in OpenGL, and it is essentially a restricted dialect of C. I would hardly consider that more powerful.

I think that what you meant to say was "GPGPU" (General Purpose computing on GPUs) with things like CUDA, OpenCL, C++ AMP, OpenACC, etc.. These are essentially extensions (and some restrictions) to C and/or C++ to be able to compile the programs to run on GPUs or a blend of CPU/GPU. What is most awesome here is the parallelism you get (if you do it correctly, which is tricky). OpenMP and Intel TBB are also great tools.

And one important thing to understand here is that these extensions or libraries are part of the C/C++ language(s), in the sense that they are part of the argument about how powerful C/C++ is. In other words, you can write normal C++ code, …

mike_2000_17

If you want a little more ease of abstracted code, go for D,
it's alot cleaner than C++ and compiles nicer, as my friends report.
(I personally like to call C++ the "Java" of lowest level languages) :P

I must object... I'm guessing you were betting on that (from seeing that ":P"). But don't play with me, cause you're playing with fire.

First of all, I think that the foul language is uncalled for. By foul language, I mean, of course, the word "Java". C++ does not deserve to be insulted and befouled like that.

Second, the D language shares far more similarities to Java than C++ does. In many ways, D is a kind of "Java'ified C++". D has modules, interfaces, garbage collection, finally blocks (aka the modern-day "goto") and no preprocessor. I used to love the ideas in the D language, when it first started, but I've been hating it more and more since they threw in all these stupid Java'isms.

The fact that the D language advocates call it a system language is laughable. If you are going to write system code in D, you are going to be relying on a heck of a lot of C code, to the point that you might as well just ditch the D code altogether.

And if you find that C++ is not clean and doesn't compile nice (whatever that means), then you're doing it wrong, especially since C++11/14.

if you want …

Tcll commented: nice :) +4
mike_2000_17

Can you clarify what you are asking? I really don't understand this question at all.

mike_2000_17

You have a superfluous * character on line 3. This should work:

bool operator==(const image &other) const
{
    return (other.btBitmap == this->btBitmap);
}
bool operator!=(const image &other) const
{
    return !(*this == other);
}
cambalinho commented: thanks for all +3
mike_2000_17

Volatile allows a global or shared variable to be read and written atomically.

That's not true (maybe you are confused with Java's volatile? For Java 5 and up, volatile is roughly equivalent to C++11's std::atomic). It's a common misconception (that, I must admit, I used to have too). Volatile only means that the compiler cannot optimize away the read / write accesses to the variable (e.g., by re-using the last cached value, either in L1 cache or in registers). Volatiles used to be considered poor man's atomic variables because of two assumptions: (1) the program runs on a single-core machine, and (2) read / write operations on primitive values (bool, int, etc.) are just a single instruction. Under those conditions, a volatile primitive variable is essentially like an atomic variable, because there is no need for any additional precautions beyond making sure that the read / write operations are indivisible (a single instruction is indivisible) and not optimized away.

Nowadays, assumption (1) is definitely out, because you can rarely find a single-core computer anywhere these days. And assumption (2) is complicated by the existence of multi-level caches and instruction pipelines in modern CPUs. So, volatile variables have sort of lost their appeal as poor man's atomics (but not completely), because they simply don't work anymore. For instance, I once used a volatile variable in place of an atomic in a real application, and saw sporadic failures (roughly, 1 in a million duty cycles (writes) of the variable) that completely …

rubberman commented: I was thinking pre-C++11. :-) +0
mike_2000_17

Unix is Neanderthal and Linux/iOS have evolved further and are better (in my opinion).

If Unix is a primitive human (e.g., neanderthal or homo erectus), and Linux is a modern human (homo sapiens), then Windows must be a platypus. ;)

RikTelner commented: *shots fired* +2
rubberman commented: I vote for wooly mammoth! :-) +12
mike_2000_17

VMs provide a pretty low-level sandbox, which means that there aren't many ways to "break out" of the VM, because from within the VM there isn't much of the host that you can see. With a basic setup, as far as I know, nothing transpires between them. However, there are a number of additional features, like being able to access the host's file-system (folders) from within the VM, which can be convenient but could also allow an infection to spread; that would generally require a virus specifically designed to do so, which is unlikely. And, of course, if you avoid using those features, and thus keep your VM very basic / isolated, then there's no danger there. In fact, it would be really difficult for a virus to even detect that it is running within a VM, let alone break out of it.

Another possibility is on the networking side of things. The VM more or less acts like any computer on your local network. Assuming that you are protecting your local network with a router-based firewall and port blocking, or even a DMZ, then if anything infects a computer on your local network, those defences are useless against anything coming from that infected computer. However, this technique is almost exclusively used in deliberate targeted attack against a computer or network. This is not something that an ordinary virus would do. And also, there are ways to protect …

mike_2000_17

No. Unix pre-dates Linux by a lot. Unix was basically one of the very first operating systems, from the earliest days of computing. Along with a few other early and often related operating systems, most notably BSD, they set the industry standard for what operating systems are and how they work. A large part of that became the formal standard called POSIX. And today, the vast majority of all operating systems in existence follow that standard (or nearly so), that includes Mac OSX, Linux, Solaris, QNX, several BSD derivatives, Android, and a number of specialized operating systems (if they're not using Linux).

In fact, Linus Torvalds explained many times that he basically created Linux because he had worked with and studied Unix systems and thought they were awesome and wanted one for his personal computer, but couldn't afford it (a license for such an early industrial-grade OS is expensive, only companies and institutions could afford it). So, he basically wrote an OS from scratch that could act as a drop-in replacement for Unix, and so it is. By all accounts, Linux is an alternative (and open-source) implementation of Unix. And now, Linux is by far more wide-spread than Unix, making it the lead figure or most visible representative of this Unix family of operating systems.

Today, all of the systems that follow this Unix / POSIX standard are collectively referred to as "Unix-like" systems, because from the perspective of writing applications, programs and scripts, …

mike_2000_17

When you open a file to read from it, it will not open successfully if the file does not exist. You can check whether the file has been opened successfully with the is_open() function, like this:

if( rstudents.is_open() )

which you could use in a loop like this:

while( !rstudents.is_open() ) {
    cout << "FileName = " << endl;
    cin >> FileName;
    rstudents.open(FileName);
    rstudents.clear();  // reset failbit left by a failed attempt (needed pre-C++11)
};

to which you could also add an error message to tell the user that the file (probably) didn't exist (I say 'probably' because there could be other reasons why a file cannot be opened, but those are rare).

Also, it is generally a good idea to check that a stream is in a good state, which you can simply do with something like if( !rstudents ) which will check if the rstudents stream is in a bad state (not good), such as having reached the end of the file or having failed to do the last read-operation, for whatever reason.

Also, the C++ standard header for the C stdlib.h header is cstdlib, as in #include <cstdlib>.

mike_2000_17

Asking how to generate 64bit executables using TurboC is like asking how to send Instagrams with a rotary phone.

Just download CodeBlocks or Visual Studio Express, both are free.

Slavi commented: lol'd +6
mike_2000_17

Macs are definitely very common among computer science or computer engineering students. I'm not sure they're really the best, though.

I would recommend a PC laptop on which you install Linux for a dual boot with Windows. Windows is better than Mac for using the engineering software you might need for your courses and projects. And Linux is a much better development environment (for programming tasks) than Mac, although Mac isn't bad either, anything is better than Windows in that department. And at the end of the day, a PC laptop will be a lot cheaper than an equivalent Mac laptop, and I assume that for you, as a student, money is a factor.

I even know people in this field who bought a Mac, thinking it would be appropriate for their work, and ended up replacing Mac OSX with Windows, and installing Linux on the side (dual boot).

mike_2000_17

Besides a few crazy little tricks that rely on gotos, there are two canonical use-cases for goto. The gotos are not absolutely necessary in those cases (you could do without them), but they are not too harmful and they make things somewhat simpler. That said, the gag reflex most programmers have to seeing a goto, or worse, writing one, is very well justified in general.

CASE 1
The first canonical case is the "break out of multiple loops" problem. This boils down to wanting to do a break that drops you out of more than one nested loop. Like this:

void foo() {
  // ...

  for(int i = 0; i < N; ++i) {
    for(int j = 0; j < M; ++j) {
      //..
      if ( something was found )
        goto end_of_loops;
    };
  };
end_of_loops:

  // ...
};

This is not too bad, because it's pretty safe and easy to understand (doesn't make "spaghetti code"). And the alternatives are not particularly nice.

One alternative is to use a flag to relay the break to the outer loop:

void foo() {
  // ...

  for(int i = 0; i < N; ++i) {
    bool should_break = false;
    for(int j = 0; j < M; ++j) {
      //..
      if ( something was found ) {
        should_break = true;
        break;
      };
    };
    if( should_break )
      break;
  };

  // ...
};

Which is a lot of additional code that is just "noise", and just to avoid the infamous "goto".

Another alternative is to …

rubberman commented: Nice writeup Mike +12
ddanbe commented: Thorough explanation, showing deep knowledge. +15
mike_2000_17

This depends on which compiler you are using.

If you use Visual Studio (presumably on a 32bit Windows), then you can install a 64bit cross-compiler (and the necessary auxiliary tools and libraries), as it says here. Of course, if you use a version of Visual Studio that is older than 2008, then you really should update it, because, as far as I'm concerned, any version prior to 2008 is completely unusable (too sub-standard, poorly performing, and feature-deprived).

If you are using MinGW (GCC), then you need to use MinGW-w64, which is a fork of MinGW that supports both 32bit and 64bit for both host (what you are running on) and target (what you are compiling for).

If you are using any other Windows compiler (Intel? IBM? Borland?), then you would have to check with those vendors what is possible.

Needless to say, if you are not working under Windows (e.g., you are working in Linux or Mac OSX), then the native toolchains produce a completely different executable format ("ELF", for all Unix-like systems), so their output obviously won't run on Windows. I don't know of any easy way to compile Windows executables from a non-Windows system (i.e., a Unix-like system); I suspect that setting this up is not for the faint of heart.

mike_2000_17

There are high-level languages that can do low-level things. C++ is the main one.

Part of the problem is that many people label a language as being low-level when it allows low-level things to be done with it (direct memory accesses, memory casts, kernel-space programming, etc..). That is basically a way to exclude any such language from their category of "high-level" languages.

If you define the "high-level" language category in an inclusive manner (as opposed to an exclusive manner), by specifying the kinds of high-level abstractions that a language should provide to be considered high-level, then languages like C++ certainly qualify. There are many other languages in the list of system programming languages that would fit this kind of definition of high-level languages, like D, Go, Ada, and Rust.

And remember, "high-level" and "low-level" are relative terms. C used to be classified as high-level, back when people used assembler or assembler-like languages. And C++ only started to be called "low-level" when some people decreed that pointers were evil, even though C++ supports all the main high-level programming paradigms that exist.

mike_2000_17

I think that you overestimate our credulity. Given your current geo-location (which I can trace, but won't disclose), you have a 1,000 mile commute to do everyday to get to the MIT campus. Hmm... I'm starting to think "liar, liar, pants on fire!".

mike_2000_17

I know. All of this.
I'm also understand logic VERY well. In fact I'm a Junior ..

But apparently, you don't understand punctuation, sentence structure, proper vs. common nouns, and grammar. ;)

Stuugie commented: lol +6
Slavi commented: +1 :D +6
mike_2000_17

1.Does it runs chromium browser.

Unless it is a very old version of a distro, any Linux distro will run Chromium just fine. Some distros don't like Chrome because of ideological reasons (not FOSS), but you can override that if you really want Chrome instead of Chromium (but they are essentially identical).

What can be a bit more problematic is flash plugins for Chromium (to be able to watch videos). I've had a few problems with that from time to time. That's the kind of thing for which it pays off to use the mainstream, up-to-date, non-derivative distributions (Ubuntu, Fedora, etc..), because when something breaks, it doesn't stay broken for long, and solutions are easily found thanks to the large community of users.

For example, I used to have Elementary-OS, which is an Ubuntu derivative that tried to be very "stable" by sync'ing with relatively old (and "stable") versions of applications (and the kernel). But when I had problems with Chromium and with the flash plugin (after an Adobe update), there was basically no fix because of how old most of Elementary-OS's stuff was. So, I changed to another light-weight distro, Lubuntu, which is actually up-to-date with the official Ubuntu, solving all my problems.

2.Does it runs games like Half life and other popular games.

Before I tackle this, I have to point out that the general rule for applications in Linux is that if it runs on one distro it works on any distro. Linux …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Has Lisp ever really been anything more than an academic language with very little real-life applicability?

I know that some software uses a Lisp-based syntax for its scripting language, and that there are tons of dialects of Lisp, but that's mainly because Lisp is super easy to parse (i.e., people who like to write interpreters and compilers look for the easiest syntax to parse, so that they won't have to go through too much trouble writing the parser for it).

There are some corner markets (like early AI) that favored Lisp or Lisp-like languages, but they are pretty small and don't really have a mountain of legacy Lisp code to maintain. And as far as I know, these domains have largely moved on from using Lisp a good while back.

Most people that I've heard talk seriously about Lisp were (1) very old and (2) teaching at a computer science department. That's a hint that it's just an old academic language. It probably has some historical importance to computer science and has some cool / smart features (all academic languages do). But to be a "legacy" language, it takes more than just being old or interesting, it needs to still be widely used (even when most would want to see it disappear, often especially those who work with it every day). Languages like Fortran, C, COBOL, Ada, SPARK, Haskell, etc..., qualify (but some are debatable) because, even though they are old and in some sense "obsolete", they are still widely used (in …

Traevel commented: Ah Lisp, the Prolog of the America's. +6
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

My grandson is a prodigy

Yeah, the only thing more easily impressionable than a young mind is a grand-father. ;)

designs/builds/programs all the control systems as well so they are totally autonomous!

If that's true ("totally autonomous" is a loaded term), then I know several people who would be more than happy to hire him.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

My first inclination would be to think that these are the wonderful illusions of youth.

I just speak from experience. The kind of feeling you describe is something that I have felt far too often to count. It's sort of a case of "the sun always shines before the clouds come". You always feel like you know everything before you realize you know nothing. You can feel totally indestructible right up to the point that a bus hits you in the face. You can be wholeheartedly convinced that you're an expert, until you meet a real expert and realize how far you have yet to come.

That said, it's wonderful that you feel this way and that you are proud of what you can accomplish. Just be prepared and open-minded to the possibility that you could get a hard dose of reality some day, and probably over and over again after that. In other words, keep a little reserve of humility; it might come in handy.

But who knows... maybe you are a prodigy..

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

if it is not gviging correct answer, then how it is still thread-safe?

That's related to what I pointed out about things being relative to how you define your requirements. In other words, you can adopt a "low-level" view of thread-safety and only consider that individual accesses to shared state are protected such that they are not corrupt. Or, you can adopt a more "high-level" view and specify an overall behavior that should be observed even in multi-threaded environments. The point is that this is all very subtle and multi-layered, and you always have to be careful about what the "thread-safe" label means for the specific context in which the term is being thrown around.

reentrant are functions when they're not using global variables and all. but in all your example, you make functions which are using global variables things.

I said that reentrant functions don't have a global state, which does not forbid the use of global variables. It only forbids the use of global mutable data (state) that is changed by the function. So, read-only access to a global (const) variable pretty much makes the global variable a parameter of the function, not a state. That's an important distinction in practice.

rubberman commented: As usual, Mike2K gives good advice. +12
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

As rubberman says, the possibility of repeated base-classes (which can be solved with virtual inheritance) is one problem. Another problem is with other ambiguities between inherited classes, like data members or member functions with the same name in the different classes (even if those members are private, the ambiguity will still be there).

Overall, the problem is that once you allow the possibility of any class being used in a multiple inheritance scheme, you have to worry about all sorts of bad interactions like the ones just mentioned. This breaks encapsulation, in the sense of having self-contained classes, and can quickly become difficult to deal with.

Also, once you allow multiple inheritance, you also have to understand that up and down casts between base and derived classes could potentially lead to (virtual) offsets being applied to the pointers (or references), which will make the use of C++ casts (as opposed to C-style casts) very important.

In other words, using multiple inheritance in a wide-spread manner requires a lot more care and diligence throughout the code, everywhere. That's why it's often discouraged (and I would discourage it too).

That said, there are tricks and specific use-cases where multiple inheritance can be a very elegant and beneficial feature. As long as it remains an occasional trick that you resort to for specific and limited purposes, it's perfectly fine.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

First, there can never be too many comments on interfaces, that is, on function declarations, class declarations, modules, or whatever else your language has as interfaces between the users and the library code. The only limiting factor in documenting interfaces should be the time you have and the effort you are willing to put into writing it. And, of course, with limited time and effort, you have to prioritize by documenting most thoroughly the functions whose behavior is most complicated to specify (there's no point wasting lots of time writing documentation for a simple getter or setter function). And, of course, this should be done using the documentation-generation tags that are relevant to your language (like doxygen for C/C++, javadoc for Java, epydoc / sphinx for Python, and so on).

However, within the actual code, it's a whole different ball-game. In this case, it's a bit more a matter of personal preference, but there is also a pretty wide consensus that too many comments impede the readability of the code. Always remember that programmers are much more used to reading code than reading English.

There are a number of ways to ensure that code is self-explaining and easily readable. Using good names for variables and functions goes a long way to make the logic self-evident (as a counter-example, in a code-base I work with now, they picked vague and senseless names like "Driver", "Action", "Task", "Invocation", "Instance", and "FrontEnd" for a set of related classes where none has a well-defined …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I speak quite a few languages too:

1) Quebecois (Canadian French, grew up speaking it)

2) English (the main language I use today; I did all my studies and relevant work in English, so I'm fluent without much of a detectable accent; people who have heard me speak English have assumed I was either (English-)Canadian or American)

3) Swedish (my father's native language and the first language I learned, due to an early childhood in Sweden)

4) French (standard "France" French, basically the same as Quebecois but with a very different accent (or more formal) and different word and expression choices)

5) German (learned it by living in Germany for about a year, and I can get around in German and watch German movies without subtitles and things like that, but conversations I can hold are limited)

6) Spanish (learned it in school, and I can basically get around in Spanish)

And a few more "bonus":

7) Finnish (lived in Finland for a year, learned it at university there (mandatory for foreign students), but I have very limited capabilities because it's a really hard language to learn)

8) Danish (if you stick a potato in my mouth and ask me to speak Swedish, then I'll basically be speaking Danish... and when I've been drunk enough, I've been able to converse with drunk Danish folks, with limited coherence ;) )

Farsi, Pashtu, Urdu

These languages must be pretty close because I've met several people who speak all three …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

thread-safe but not reentrant

Here is a simple example (in C++, cause I'm not fluent enough in C for this):

#include <atomic>

// Shared state: a global variable, but accessed only atomically.
std::atomic<int> value{42};

void foo() {
  // Two atomic loads, then one atomic store: each operation is
  // individually safe, but the three-step sequence can be interrupted.
  value.store( value.load() + value.load() );
}

In this case, because foo performs all of its accesses to the shared data (the "value" global variable), both reads and writes, via the atomic operations load and store, it is perfectly thread-safe in the sense that no load or store can occur while some other thread is in the middle of loading or storing it too. However, if this function is interrupted after the first load operation and before the second, the global value could undergo some sort of change in the meantime (by, for example, calling foo again), which makes the function non-reentrant, because its outcome would be different from (and inconsistent with) having executed it entirely the first time around.

But it's all a matter of expected behavior. Generally speaking, the expected behavior of the foo function here is that after foo returns, "value" ends up with twice the value that it had when foo was entered. And in that sense, it is clearly not reentrant. One could also specify the thread-safety in terms of that expected behavior and thus say that this function is not thread-safe, because it cannot guarantee this behavior. But a more basic thread-safety definition would say that any imaginable sequence of multi-threaded executions of this code (with repeated and concurrent calls to foo) has a …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

For me, this happens in any editor (reply or new thread), and it consistently happens with any text that spans multiple lines (wraps). If a line ends with a space and you try to move the cursor to the position just before that space (after the last non-space character), the cursor jumps to the start of that line instead. If you need an example post, just use this one, because I'm having the issue right now as I'm writing this post.

Version of browser:

Chromium    39.0.2171.65 (Developer Build) Ubuntu 14.04
Revision    b853bfefba0da840f4574eb3b5c7ad6e9b8573b5
OS  Linux 
Blink   537.36 (@185325)
JavaScript  V8 3.29.88.17
Flash   13.0.0.206
User Agent  Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Ubuntu Chromium/39.0.2171.65 Chrome/39.0.2171.65 Safari/537.36
cereal commented: same for me +0
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Well, as rubberman said, reentrancy is a more theoretical notion about formal analysis, but I would also add that in terms of multi-threading, reentrancy provides a very strong guarantee that a function will work as expected when invoked concurrently (that is, when simultaneous threads execute the same function).

You have to understand that concurrent programming (or "multi-threading") is all about the data that is shared between the concurrent threads, that is, the (mutable) data that multiple threads need access to more or less at the same time (or in some specific sequence). We call that "shared state".

By definition, a reentrant function does not have any state (mutable data) outside of itself (i.e., it can have local variables, but it doesn't access any mutable variables of wider scope, such as global variables); that's basically the first (and main) rule for a function to be reentrant. Since it does not have any state, it certainly does not have any shared state. And this is what makes it nearly fool-proof for concurrent use.

But note that there is also a weaker definition of reentrancy that only requires that the function not change the value of any global data by the time it returns (e.g., it could momentarily change some global data and then restore it before exiting). But this is a weak definition that is only meaningfully "reentrant" in a single-threaded environment (if at all), and it is therefore not what people usually mean when discussing reentrancy in the context of multi-threading.

The …

rubberman commented: Excellent response Mike! +12
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

And generally speaking, allergies and other breathing discomforts are mostly caused by the larger particles and mold in the air. The pollutants (i.e., "chemicals" as some fools might call them) that are dissolved in the air might have some bad long-term implications for your health (like carcinogens), but they won't cause nearly as much allergy or asthma in the short term, meaning that it is likely the larger particles (within the cigarette smoke) that cause your bad reactions.

And as GrimJack points out, the pollutants that cigarette smoke puts into the air in any urban environment are dwarfed by all the other pollutants that city dwellers are exposed to (e.g., standing for an hour outside on a busy street corner is more harmful than being in a closed room with a chain smoker for a whole day).

I'm all droopy eyed right now ... trying so hard to get a good night's sleep feeling nauseous and also have a splitting headache all thanks to the nicotine infested air ... Grrrr....

I'm not a doctor, but I must point out two important and very possible alternative causes for your symptoms. First, mold problems are very common, especially in older and poorly maintained apartment complexes. It is very possible that the cigarette smoke is simply making you more sensitive to the mold in the air, because it is far more common to have such symptoms from mold in the air than from cigarette smoke.

Second, you said things like "second-hand …

mattster commented: Are you sure you're not a doctor? ;p +0
sweetsmile commented: Thanks a lot for the elaborate reply . I think my symptoms are more physiological not pscychological +0
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Open source does not only apply to programming. If you consider "source" to mean any kind of blueprints, models, design specifications, or typesetting (Word documents or LaTeX files), basically anything that is the basis for making something, whether it is software, a book, or a physical product of some kind, then the open-source term can apply to it, if it meets the concept of being "open". Without getting into licensing politics, the essential elements of being "open" are that it is widely available and can be modified by anyone (and maybe contributed to by anyone).

In that sense, Wikipedia is an open-source encyclopedia. Anyone can access the typesetting source of Wikipedia pages and can contribute to them. Any Wikipedia-style page qualifies in a similar way.

There are also physical products that are open-source. For example, Arduino is an open source board design. OpenSPARC is an open-source microprocessor (from Sun's SPARC family of microprocessors).

And if you stretch things a bit (but not that much), you could say that standards (like ISO or ANSI standards) are essentially open source projects, and in fact, many of them are prefixed with "Open" for exactly that reason, like "OpenGL". Standards are available to all, can be followed by anyone wishing to do so, and can generally be contributed to openly (through committee participation or proposals given to the committee). Most standards work this way.

aren't freely available books also part of open source.

It is …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Warning: Software engineers love to provide vague definitions for every term, and then, they vehemently insist that they are very different and should not be confused for one another.

Basically, information hiding and encapsulation are, in a way, two sides of the same coin. The idea of information hiding is that your users should not know more about your code (classes, functions, etc.) than they need to in order to be able to use it. The idea of encapsulation is that your code shouldn't be spilling its guts in public, i.e., it should be neatly packed and hermetically sealed. Clearly, that sounds like the same thing, and it often is in practice (there are many ways, especially in C++, to implement both, and they are often the same).

The critical difference is in the point of view, not so much in the actual code. They are two ideals. Information hiding is the ideal of minimizing (down to the essentials) the information about your implementation that the user is exposed to. Encapsulation is the ideal of minimizing the dependencies on external components (or other components of your library) down to only what is essential to its use. In other words, let's say you are writing component A, and to write it you use component B, but the user does not need to know about that when using component A. Well, information hiding tells you that he shouldn't know that A uses B, and encapsulation tells you that he shouldn't be bothered by A's …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

And their Visual Stupido on Linux is, of course, some remote-access cloud-based thing that no sane programmer would ever want to use, but that is so annoyingly easy to sell to the I-have-no-clue-what-my-employees-actually-do kind of manager. They probably sell it by telling managers that the remote-access / cloud stuff allows their employees to work from anywhere on any platform.

For instance, on a 2 hour transfer at some airport on a business trip, the Micro$oft sales people will probably argue that their product allows your employees to be productive in those two hours, not mentioning that they'll need a high-bandwidth connection (the kind you rarely get at an airport), a secure connection (the kind you never get at an airport), an outlet to plug in the laptop (because their product will surely suck that battery dry in no time), a table to set it on (not to burn your lap), they'll need to connect through a VPN of some kind (and watch out for shoulder-surfing), probably wait a while for the sync to be completed, and then, they can start working, but will constantly be slowed down by all the lag in the system.

In the meantime, rubberman can be sitting in the same airport, with a small laptop that he doesn't have to plug in or take off his lap (because it's not overheating) and code away on his handy standalone light-weight code editor on his own local version of the code. If he ever …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I agree that D might turn out to be another "Esperanto" programming language.

D seems to have tried to satisfy everyone, and therefore pleases no one. I remember how Andrei was pitching D to C++ programmers by painting the language as essentially the same as C++ but with a cleaner syntax for meta-programming, with python-style modules, and with native concurrency and parallelism constructs, all while respecting the main tenets of C++ (scoping rules, value-semantics, templates, multi-paradigm programming, ease of making DSELs, etc.).

That all sounded great until they mentioned that D uses garbage collection, and not only that, but a "Java / .NET" style of garbage collector (a virtual machine supervising everything), instead of a "python / php" style (a reference-counting scheme with a reference-cycle breaker). That's a big turn-off for any C++ programmer. So, they had to backtrack a bit and provide a rather terrible "C++/CLI" style of managed vs. unmanaged memory. What is especially astounding is their official description of garbage collection, which sounds like it was written by a computer-science freshman and is full of old myths and tales from Java echo chambers (like the claim that garbage collectors don't leak memory!! lol..). It baffles me a bit because it has become clearer and clearer in the expert community that Java-style garbage collection is a failed experiment (along with its "checked exceptions"), and that consensus was building around the idea of Python-style collectors, …

Maritimo commented: Very well explained. Thanks. +2
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

1) Is this statement correct: In memory, elements of an array are stored "against" each other (I think more precisely, contigious space is allocated for them?) and a pointer to an array points to the first element.

Yes. This statement is correct.

2) Are there any memory-specific issues that the below code will encounter?

No (but as you said, it's incomplete). There are a number of things that could be done better, but there is no memory-related "wrong-doing".

3) If it interests me, would tools such as Valgrind worth looking into now, or should I wait until I have a better grasp of C++?

It's never too early to try out Valgrind. If you have any doubts that some piece of code might be doing something funny memory-wise, like fiddling with pointers in some dubious way, then just run your program in valgrind (it helps if you compile with debugging symbols, which is done with the -g option for GCC / MinGW). If there is anything wrong, valgrind will most probably catch it. And after all, what's the worst that could happen? The worst is that valgrind gives you a report that you don't understand; if so, just retry later on or come here and ask us what it means. It can only help you learn.

4) My "String List" is not complete, but are there any other glaring issues I might look into?

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

According to the documentation, the setHtml() function sets the html data directly (as a QString containing the html code), and you can use toHtml() to retrieve that html code later on. If you got an html page from some URL and you dumped the html source code from that page into a QString that you passed to the setHtml() function, how could it ever be possible for the QTextBrowser object to have any idea of where that code came from?

What you need to use is the source property, which is used to get/set the URL of the page that you display in the QTextBrowser, as explained in the documentation.

In summary, RTFM!

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Then when I restarted, it was GRUB the default loader

So, that means that the boot repair utility installed Grub on the MBR. There is no easy way to go back now. You pretty much have to keep it like that until the end of time.

easyBCD still doesn't detect that grub loader.

The entries that you created with EasyBCD are now useless. Grub is now on the MBR, and your old (broken) grub that you had on the Linux partition is no longer needed. You should go into Windows and remove the EasyBCD entries (other than Windows, of course!), this way, selecting Windows under grub should lead you directly to Windows, without having the EasyBCD entries menu appear in between.

Just so that you understand what is going on, let me just draw out how it used to be, and how it is now.

Before, you had the following sequence:

Windows Boot Loader (on MBR of '/dev/sda')
 v
Windows Boot Manager (on 'C:\' drive)
  (with entries added by EasyBCD)
 v
Grub 2 (on Linux partition '/dev/sda1')
 v
Linux booted!

Now, the setup has changed to this:

For booting Windows:

Grub 2 (on MBR of '/dev/sda')
 v
Windows Boot Manager (on 'C:\' drive)
 v
Windows booted!

For booting Linux:

Grub 2 (on MBR of '/dev/sda')
 v
Linux booted!

Now that you have grub on the MBR, the path is direct to Linux. This is the "recommended" way to do things (not recommended by …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Try the boot-repair disk first.

Slavi commented: Sure =) +0
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

then I tried also

$ sudo grub-install --root-directory=/mnt/ /dev/sda1

Be careful, man! I remember telling you in a PM that you have to be careful about every step you take and never leave anything you don't understand in the commands that you issue. What is your understanding of the option --root-directory=/mnt/?

This option comes from the instructions that I linked to that refer to installing grub from a LiveCD/USB. The instructions mount the Linux partition onto a folder that they call /mnt, and then, they direct the grub install command to that directory where the Linux partition is mounted, so that the grub installation is done for that Linux installation and not the current running instance (the LiveCD/USB).

If you have managed to boot into the Linux installation that you are trying to repair, then you must not put that option there.

When you are in the Linux installation you are trying to repair, the correct command to reinstall it on your Linux partition is:

$ sudo grub-install /dev/sda1

And if grub throws a fit about not wanting to do this "dangerous" operation (the grub developers really would prefer that you install grub as the primary bootloader, instead of chain-loading with EasyBCD, but I tend to disagree with them, because grub is too brittle, IMO, to be the main bootloader), then you have to "force" it to do it:

$ sudo grub-install /dev/sda1 --force

But I'll await the reply about the reconfiguration of …