mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I agree with prit, it's pretty vomit-inducing.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Wow.. there's been a lot of activity here since I last checked...

@Mike The original song is "Play with fire" by The Rolling Stones

I know, but I like Kandle's version better.

if you read the comments on most of them, the downvote is because of the ignorance.

I've learned that it's better to lead people to discover their own ignorance than to mock or reprimand it.

APott is the one who married D

I'm just gonna say... 50% of marriages end in divorce.. ;)

there's only 1 language that's more powerful than C, GLSL

Really?? GLSL is only for vertex and fragment shaders in OpenGL, and it is essentially a restricted, C-like shading language. I would hardly consider that more powerful.

I think that what you meant to say was "GPGPU" (General Purpose computing on GPUs) with things like CUDA, OpenCL, C++ AMP, OpenACC, etc.. These are essentially extensions (and some restrictions) to C and/or C++ to be able to compile the programs to run on GPUs or a blend of CPU/GPU. What is most awesome here is the parallelism you get (if you do it correctly, which is tricky). OpenMP and Intel TBB are also great tools.
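Just to give a rough feel for how little the code changes with one of these tools, here is a minimal OpenMP sketch (the function and variable names are made up for illustration); the pragma asks an OpenMP-enabled compiler (e.g., GCC with -fopenmp) to spread the loop iterations across threads, and without OpenMP it simply compiles to a normal serial loop:

#include <vector>

// Each iteration is independent of the others, so the OpenMP runtime
// is free to distribute them across the available CPU cores.
void scale_all(std::vector<double>& data, double factor) {
  #pragma omp parallel for
  for (long i = 0; i < static_cast<long>(data.size()); ++i)
    data[i] *= factor;
}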

And one important thing to understand here is that these extensions or libraries are part of the C/C++ language(s), in the sense that they are part of the argument about how powerful C/C++ is. In other words, you can write normal C++ code, …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

If you want a little more ease of abstracted code, go for D,
it's alot cleaner than C++ and compiles nicer, as my friends report.
(I personally like to call C++ the "Java" of lowest level languages) :P

I must object... I'm guessing you were betting on that (from seeing that ":P"). But don't play with me, cause you're playing with fire.

First of all, I think that the foul language is uncalled for. By foul language, I mean, of course, the word "Java". C++ does not deserve to be insulted and befouled like that.

Second, the D language shares far more similarities to Java than C++ does. In many ways, D is a kind of "Java'ified C++". D has modules, interfaces, garbage collection, finally blocks (aka the modern-day "goto") and no preprocessor. I used to love the ideas in the D language, when it first started, but I've been hating it more and more since they threw in all these stupid Java'isms.

The fact that the D language advocates call it a system language is laughable. If you are going to write system code in D, you are going to be relying on a heck of a lot of C code, to the point that you might as well just ditch the D code altogether.

And if you find that C++ is not clean and doesn't compile nicely (whatever that means), then you're doing it wrong, especially since C++11/14.

if you want …

Tcll commented: nice :) +4
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Can you clarify what you are asking? I really don't understand this question at all.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

You have a superfluous * character on line 3. This should work:

bool operator==(const image &other) const
{
    return (other.btBitmap == this->btBitmap);
}
bool operator!=(const image &other) const
{
    return !(*this == other);
}
cambalinho commented: thanks for all +3
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

What have you tried so far towards solving this problem? We will not do your homework for you. You need to show that you are making efforts towards doing this yourself, and we can help you with specific issues that you are having with it.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

The characters that you get from getch() should be put into the "b" array, like so:

// ...
cout << "Enter Password:  ";
int i = 0;
while(i < 29)
{
  char c = getch();
  if((c == '\r') || (c == '\n'))   // stop when the Enter key is pressed
    break;
  b[i] = c;
  cout << "*";
  ++i;
}
b[i] = '\0';
if (a[0]=='l' && a[1]=='u' && a[2]=='f' && a[3]=='f' && a[4]=='y' && b[0]=='g' && b[1]=='a' && b[2]=='r' && b[3]=='p')
{
  // ...

im using turbo c++ ide 3.0 "not dev c++".

I hope that you are aware that you are learning to use an antiquated pre-standard variant of C++ by relying on such an ancient compiler, right? Turbo C++ 3.0 came out in 1991, and the first standard for C++ came out in 1998. Turbo C++ is literally a pre-historic compiler. Are you aware that much of what you are learning by using this compiler will have to be unlearned before you can start to program in the real world? The language and practices have evolved significantly since then, including 2 standards (in 1998 and 2011), 2 revisions to the standards (2003 and 2014), and a few technical reports / specifications (2007, 2010, 2014, and more to come). Not to mention that most people consider the real birth of C++ to be around the 2001-2003 years when real modern practices were established. You are basically learning to knock stones together to make fire, while most people are driving Formula 1 cars.

Of course, your code is riddled with non-standard stuff (and …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

If you can't use C++11, you can just use Boost.Thread, which is nearly identical to standard threads of C++11.

Apart from using a mutex?

Using a mutex directly is kind of awkward in this case (you would have to lock it during initialization and release it when done, and wait for that release in the second thread). Another typical, but also awkward, solution is to use a condition variable, which also involves a mutex; I find it awkward because you have to deal with spurious wake-ups and things like that.

The solution that I would recommend is a future<void>. Using Boost.Thread, you would do this:

#include <boost/thread/future.hpp>

boost::promise<void> signalFromThreadB;

// ==================================
// Thread A

boost::future<void> eventFromB = signalFromThreadB.get_future();

// .. do some stuff ..

// wait for the signal from thread B:
eventFromB.get();

// .. do some more stuff ..

// ==================================


// ==================================
// Thread B

// .. do some stuff ..

if( /* initialization went OK */ )
  signalFromThreadB.set_value();
else
  signalFromThreadB.set_exception(some_exception());

// .. continue ..

// ==================================

And as you can notice, the future-promise mechanism is really nice because it's simple, it works, and you can even communicate an exception across the threads (set the exception on thread B, and it will get thrown in thread A, which is nice).

You can also use something other than void if you want to safely communicate a piece of data across the threads, along with the notification.

In your case, this might be overkill, but at the same time, using something like future-promise gives …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Isn't that a flat-out polling loop that's going to burn 100% of the spare CPU time?

It's not burning 100% CPU time because it yields its time at each iteration.

Safe, maybe, but not good practice?

It is good practice in some cases. Without more context, we can't judge it. Mainly, such a spin-lock is appropriate when contention is very low, that is, the vast majority of the time, when the loop is reached, the condition is already met, and therefore, there is no actual looping happening; it just passes the condition and moves on. In those cases, if you were to use a synchronization primitive like a mutex (or a condition-variable, or a future-promise mechanism), you would end up doing a kernel-space switch, which has a run-time cost, whether the condition is already met or not.
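For concreteness, here is a minimal sketch of the kind of yielding spin-wait being discussed, using C++11 atomics (the names are made up for illustration):

#include <atomic>
#include <thread>

std::atomic<bool> ready{false};   // set to true by some other thread

// If 'ready' is almost always already set by the time we get here, the loop
// body almost never runs and we avoid the cost of a kernel-level wait entirely.
void wait_until_ready() {
  while (!ready.load(std::memory_order_acquire))
    std::this_thread::yield();   // give the rest of this time slice to other threads
}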

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Volatile allows a global or shared variable to be read and written attomically.

That's not true (maybe you are confused with Java's volatiles? For Java 5 and up, "volatile" is equivalent to C++11's std::atomic). It's a common misconception (that, I must admit, I used to have too). Volatile variables only mean that the compiler cannot optimize away the read / write accesses to the variable (e.g., by re-using the last cached value either in L1 cache or in registers). They used to be considered as poor man's atomic variables because of two assumptions: (1) the program runs on a single-core machine and (2) read / write operations on primitive values (bool, int, etc.) are just a single instruction. Under those conditions, a volatile primitive variable is essentially like an atomic variable, because there is no need for any additional precautions except for making sure that the read / write operations are indivisible (a single instruction is indivisible) and not optimized away.
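To make the comparison concrete, here is the classic "stop flag" written both ways (a small sketch with made-up names):

#include <atomic>

// The old "poor man's atomic" style: volatile only stops the compiler from
// caching the read; it gives no atomicity or ordering guarantees on a
// modern multi-core machine.
volatile bool stop_requested_old = false;

// The modern equivalent, with well-defined cross-thread semantics:
std::atomic<bool> stop_requested{false};

void worker_loop() {
  while (!stop_requested.load(std::memory_order_acquire)) {
    // .. do one cycle of work ..
  }
}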

Nowadays, assumption (1) is definitely out because you can rarely find a single-core computer anywhere these days. And assumption (2) is complicated by the existence of multi-level caches and instruction pipelines in modern CPUs. So, volatile variables have sort of lost their appeal as poor man's atomics (but not completely), because they simply don't work anymore. For instance, I once used a volatile variable in place of an atomic in a real application, and saw sporadic failures (roughly, 1 in a million duty cycles (writes) of the variable) that completely …

rubberman commented: I was thinking pre-C++11. :-) +0
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Unix in Neanderthal and Linux/iOS have evolved further and are better (in my opinion).

If Unix is a primitive human (e.g., neanderthal or homo erectus), and Linux is a modern human (homo sapiens), then Windows must be a platypus. ;)

RikTelner commented: *shots fired* +2
rubberman commented: I vote for wooly mammoth! :-) +12
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

VMs provide a pretty low-level sandbox, which means that there aren't many ways to "break out" of the VM, because from within the VM there isn't much of the host that you can even see. With a basic setup, as far as I know, nothing passes between them. However, there are a number of optional features, such as being able to access the host's file-system (folders) from within the VM, which can be convenient at times but could also allow an infection to spread. Even then, it would generally require that the virus in question be specifically designed to do that, which is unlikely. And, of course, if you avoid using those features, and thus keep your VM very basic / isolated, then there's no danger there. In fact, it would be really difficult for a virus to even detect that it is running within a VM, let alone break out of it.

Another possibility is on the networking side of things. The VM more or less acts like any computer on your local network. Assuming that you are protecting your local network with a router-based firewall and port blocking, or even a DMZ, then if anything infects a computer on your local network, those defences are useless against anything coming from that infected computer. However, this technique is almost exclusively used in deliberate, targeted attacks against a computer or network. This is not something that an ordinary virus would do. And also, there are ways to protect …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

No. Unix pre-dates Linux by a lot. Unix was basically one of the very first operating systems, from the earliest days of computing. Along with a few other early and often related operating systems, most notably BSD, they set the industry standard for what operating systems are and how they work. A large part of that became the formal standard called POSIX. And today, the vast majority of all operating systems in existence follow that standard (or nearly so), that includes Mac OSX, Linux, Solaris, QNX, several BSD derivatives, Android, and a number of specialized operating systems (if they're not using Linux).

In fact, Linus Torvalds explained many times that he basically created Linux because he had worked with and studied Unix systems and thought they were awesome and wanted one for his personal computer, but couldn't afford it (a license for such an early industrial-grade OS is expensive, only companies and institutions could afford it). So, he basically wrote an OS from scratch that could act as a drop-in replacement for Unix, and so it is. By all accounts, Linux is an alternative (and open-source) implementation of Unix. And now, Linux is by far more wide-spread than Unix, making it the lead figure or most visible representative of this Unix family of operating systems.

Today, all of the systems that follow this Unix / POSIX standard are collectively referred to as "Unix-like" systems, because from the perspective of writing applications, programs and scripts, …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

You don't need to do anything to access the current working directory... by definition, the current working directory is the directory that you are currently accessing.

If you want to know the full path of the current working directory, you can just use the pwd command, as it explains here.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

The problem seems to be that you are not linking against the correct library. You need to link against the gpu-enabled library, as shown here.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

When you open a file for reading, it will not open successfully if the file does not exist. You can check whether the file has been opened successfully with the is_open() function, like this:

if( rstudents.is_open() )

which you could use in a loop like this:

while( !rstudents.is_open() ) {
    cout << "FileName = " << endl;
    cin >> FileName;
    rstudents.open(FileName);
};

to which you could also add an error message to tell the user that the file (probably) didn't exist (I say 'probably' because there could be other reasons why a file cannot be opened, but those are rare).

Also, it is generally a good idea to check that a stream is in a good state, which you can simply do with something like if( !rstudents ) which will check if the rstudents stream is in a bad state (not good), such as having reached the end of the file or having failed to do the last read-operation, for whatever reason.
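For example, a read loop driven by the stream state itself could look like this (a minimal sketch; the file name and record format are made up):

#include <fstream>
#include <iostream>
#include <string>

int main() {
  std::ifstream rstudents("students.txt");   // hypothetical file name
  std::string name;
  double grade = 0.0;
  // The stream converts to 'false' as soon as a read fails (bad format,
  // end of file, failed open, ...), so it can drive the loop directly:
  while (rstudents >> name >> grade)
    std::cout << name << " : " << grade << '\n';
  if (!rstudents.is_open())
    std::cout << "The file could not be opened." << std::endl;
  return 0;
}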

Also, the C++ standard header for the C stdlib.h header is cstdlib, as in #include <cstdlib>.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Asking how to generate 64bit executables using TurboC is like asking how to send Instagrams with a rotary phone.

Just download CodeBlocks or Visual Studio Express, both are free.

Slavi commented: lol'd +6
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Macs are definitely very common among computer science and computer engineering students, but I'm not sure they're really the best choice.

I would recommend a PC laptop on which you install Linux for a dual boot with Windows. Windows is better than Mac for running the engineering software you might need for your courses and projects. And Linux is a much better development environment (for programming tasks) than Mac, although Mac isn't bad either; anything is better than Windows in that department. And at the end of the day, a PC laptop will be a lot cheaper than an equivalent Mac laptop, and I assume that for you, as a student, money is a factor.

I even know people in this field who bought a Mac, thinking it would be appropriate for their work, and ended up replacing Mac OSX with Windows, and installing Linux on the side (dual boot).

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Besides a few crazy little tricks that rely on goto's, there are two canonical use-cases for goto's. The goto's are not absolutely necessary in those cases (you could do without them), but they are not too harmful and make things somewhat simpler. That said, the gag reflex of most programmers to seeing a goto, or worse, writing one, is very well justified in general.

CASE 1
The first canonical case is the "break out of multiple loops" problem. This boils down to wanting to do a break that drops you out of more than one nested loop. Like this:

void foo() {
  // ...

  for(int i = 0; i < N; ++i) {
    for(int j = 0; j < M; ++j) {
      // ..
      if ( /* something was found */ )
        goto end_of_loops;
    }
  }
end_of_loops: ;  // the ';' is needed because a label must be followed by a statement

  // ...
}

This is not too bad, because it's pretty safe and easy to understand (doesn't make "spaghetti code"). And the alternatives are not particularly nice.

One alternative is to use a flag to relay the break to the outer loop:

void foo() {
  // ...

  for(int i = 0; i < N; ++i) {
    bool should_break = false;
    for(int j = 0; j < M; ++j) {
      // ..
      if ( /* something was found */ ) {
        should_break = true;
        break;
      }
    }
    if( should_break )
      break;
  }

  // ...
}

Which is a lot of additional code that is just "noise", and just to avoid the infamous "goto".

Another alternative is to …

rubberman commented: Nice writeup Mike +12
ddanbe commented: Thorough explanation, showing deep knowledge. +15
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

This depends on which compiler you are using.

If you use Visual Studio (presumably on a 32bit Windows), then you can install a 64bit cross-compiler (and the necessary auxiliary tools and libraries), as it says here. Of course, if you use a version of Visual Studio that is older than 2008, then you really should update it, because, as far as I'm concerned, any version prior to 2008 is completely unusable (too sub-standard, poor performing, and feature-deprived).

If you are using MinGW (GCC), then you need to use MinGW-w64, which is a fork of MinGW that supports both 32bit and 64bit for both host (what you are running on) and target (what you are compiling for).

If you are using any other Windows compiler (Intel? IBM? Borland?), then you would have to check with those vendors what is possible.

Needless to say, if you are not working under Windows (e.g., you are working in Linux or Mac OSX), then compiling natively will give you a completely different executable format ("ELF", used by all Unix-like systems), which obviously won't run on Windows. I don't know of any easy way to cross-compile Windows executables from a non-Windows (i.e., Unix-like) system; I suspect that setting this up is not for the faint of heart.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

C and C++ differ by two plus signs.
C and C# differ by one pound character.
C++ and C# differ by the replacement of two plus signs by a pound character.

Was that the answer you were looking for? Because I can't tell what you want us to explain other than that, because you didn't specify what differences you want to know about.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

First of all, you have to make sure that you compiled the project with debugging symbols enabled. How you do that depends on the project itself and the way its makefiles are set up.

Then, to trace back the origin of segmentation faults, gdb is not the best tool. The standard tool for this type of debugging is Valgrind with its memcheck tool. With debugging symbols enabled and all that, you'll get a nice line-by-line trace-back of where the segmentation fault came from.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

There are high-level languages that can do low-level things. C++ is the main one.

Part of the problem is that many people label a language as being low-level when it allows low-level things to be done with it (direct memory accesses, memory casts, kernel-space programming, etc..). That is basically a way to exclude any such language from their category of "high-level" languages.

If you define the "high-level" language category in an inclusive manner (as opposed to an exclusive manner) by specifying the kinds of high-level abstractions that the language should provide to be considered as high-level. Then, languages like C++ certainly qualify to that category. There are many other languages in the list of system programming languages that would fit this kind of definition of high-level languages, like D, Go, Ada, and Rust.

And remember, "high-level" and "low-level" are relative terms. C used to be classified as high-level, back when people used assembler or assembler-like languages. And C++ only started to be called "low-level" when some people decreed that pointers were evil, even though C++ supports all the main high-level programming paradigms that exist.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I think that you overestimate our credulity. Given your current geo-location (which I can trace, but won't disclose), you have a 1,000 mile commute to do everyday to get to the MIT campus. Hmm... I'm starting to think "liar, liar, pants on fire!".

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I know. All of this.
I'm also understand logic VERY well. In fact I'm a Junior ..

But apparently, you don't understand punctuation, sentence structure, proper vs. common nouns, and grammar. ;)

Stuugie commented: lol +6
Slavi commented: +1 :D +6
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

1.Does it runs chromium browser.

Unless it is a very old version of a distro, any Linux distro will run Chromium just fine. Some distros don't like Chrome because of ideological reasons (not FOSS), but you can override that if you really want Chrome instead of Chromium (but they are essentially identical).

What can be a bit more problematic is the Flash plugin for Chromium (to be able to watch videos). I've had a few problems from time to time with that. That's the kind of thing for which it pays off to use a main-stream, non-derivative, up-to-date distribution (Ubuntu, Fedora, etc.), because when something breaks, it doesn't stay broken for long, and solutions are easy to find thanks to the large community of users.

For example, I used to have Elementary-OS, which is an Ubuntu derivative that tried to be very "stable" by sync'ing with relatively old (and "stable") versions of applications (and the kernel). But when I had problems with Chromium and with the flash plugin (from an Adobe update), there was basically no fix because of how old most of Elementary-OS's stuff was. So, I changed to another light-weight distro, Lubuntu, which is actually up-to-date with the official Ubuntu, and that solved all my problems.

2.Does it runs games like Half life and other popular games.

Before I tackle this, I have to point out that the general rule for applications in Linux is that if it runs on one distro it works on any distro. Linux …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Has Lisp ever really been anything more than an academic language with very little real-life applicability?

I know that some software uses Lisp-based syntax for its scripting language, and that there are tons of dialects of Lisp, but that's mainly because it's super easy to parse (i.e., people who like to write interpreters and compilers look for the easiest syntax to parse, so that they won't have to go through too much trouble writing the parser for it).

There are some corner markets (like early AI) that favored Lisp or Lisp-like languages, but they are pretty small and don't really have a mountain of legacy Lisp code to maintain. And as far as I know, these domains have largely moved on from using Lisp a good while back.

Most people that I've heard talk seriously about Lisp were (1) very old and (2) teaching at a computer science department. That's a hint that it's just an old academic language. It probably has some historical importance to computer science and has some cool / smart features (all academic languages do). But to be a "legacy" language, it takes more than just being old or interesting, it needs to still be widely used (even when most would want to see it disappear, often especially those who work with it every day). Languages like Fortran, C, COBOL, Ada, SPARK, Haskell, etc..., qualify (but some are debatable) because, even though they are old and in some sense "obsolete", they are still widely used (in …

Traevel commented: Ah Lisp, the Prolog of the America's. +6
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

My grandson is a prodigy

Yeah, the only thing more easily impressionable than a young mind is a grand-father. ;)

designs/builds/programs all the control systems as well so they are totally autonomous!

If that's true ("totally autonomous" is a loaded term), then I know several people who would be more than happy to hire him.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Lubuntu follows the same support schedule as the main Ubuntu. You can get the latest LTS (Long Term Support) version, which is 14.04, which has official support until 2019.

or i have to update system daily...!

Not sure what you mean. You get automatically notified of any software, OS, or kernel updates available. It is true that they come pretty regularly, almost daily, but that's just because the open-source world is very dynamic and responsive to bug-fixes (and especially security vulnerabilities, if any). Most of the time, you don't have to restart your computer when updates are done. It is really only kernel updates (that come maybe once per month, approximately) that require a reboot. The only reason why it seems that you constantly get updates in Linux is because all the software (applications) updates are channeled through the package manager (software center). By contrast with Windows, Windows updates only Windows itself (kernel and its other bloated OS "frameworks") but does not update any of the applications, which means that you typically just have the version of the applications that you originally installed, with virtually no updates. In Linux, all applications (and the libraries they rely on) get regularly updated, but like I said, most of them don't require a system reboot, all you have to do is click on the icon, approve the updates and go about your business as usual. And if you don't want to bother with updates, just don't install them (or install them …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Headers like "conio.h" and "graphics.h" are extremely old and obsolete headers from another era. I think that they are too old even for an ancient IDE like Dev-C++ 4.9.9.2. The direct graphics (GDI) that this library uses has been deprecated for quite some time. I think there is probably still a way to use them through backward-compatible libraries provided by Windows, but I'm not sure what they are or how to link to them (or if you even can without special measures taken).

People simply don't use this stuff anymore, and haven't for at least a decade or two.

Also, <stdlib.h> and <stdio.h> are the C headers; the standard C++ equivalents are <cstdlib> and <cstdio>. The code that you have there is what we call "pre-standard C++", which dates back to the early 90s, before the first standard for C++ (in 1998). The biggest problem really is that you are basically doing software archeology (looking at legacy code from an ancient and forgotten era of programming civilization). And for such a trivial piece of code, I don't see the point in trying to revive it.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Cesar Romero

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I have an old laptop from roughly the same generation (similar specs, core 2 duo and 1 GB of RAM), and I run Lubuntu on it, which is a light-weight variant of Ubuntu that works very well for such older hardware. That's what I would recommend for your hardware. There are some variants of Lubuntu too, like Peppermint OS.

There are some other variants that might work too. If you really want to avoid Ubuntu variants (why?), then you could go for the LXDE spin of Fedora, which is kind of the fedora equivalent to Lubuntu.

Crunchbang Linux is another popular "hack'ish" distro for older hardware.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

My first inclination would be to think that these are the wonderful illusions of youth.

I just speak from experience. The kind of feeling you describe is something that I have felt far too often to count. It's sort of a case of "the sun always shines before the clouds come". You always feel like you know everything before you realize you know nothing. You can feel totally indestructible right up to the point that a bus hits you in the face. You can be wholeheartedly convinced that you're an expert, until you meet a real expert and realize how far you have yet to come.

That said, it's wonderful that you feel this way and that you are proud of what you can accomplish, that's great. Just be prepared and open-minded to the possibility that you could get a hard dose of reality some day, and probably over and over again after that. In other words, keep a little reserve of humility, it might come in handy.

But who knows... maybe you are a prodigy..

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Venus also has an atmosphere much thicker than Earth's which is why it is even hotter than Mercury on the surface even though Mercury is closer to the sun.

Right. My point exactly. There is a lot more to it than what the original article considers, which is very simplistic. NASA has a much nicer set of web-pages explaining the whole energy balance. It is clear that the current state of affairs is a fine balance of temperatures, climate effects and chemistry of the Earth and atmosphere.

Abruptly bumping up the amount of solar energy coming to the Earth would completely throw this out of whack. Given the very short time scale (a few weeks), there's no chance for much of a new balance to establish itself, but the excess heat will still have to go somewhere, probably into the most volatile and shallow absorbers and reactors. It would probably cause a lot more surface-water evaporation. Also, things that can burn easily would probably do so en masse. And at that point, the chemistry of the atmosphere would change significantly, and there's no telling for sure what would happen.

Earth has a lot more gravity than a comet so even the increased friction & solar wind probably won't do much more than blow off some of the atmosphere

Gravity is a pretty weak force, and doesn't really matter much here. The main thing that keeps the atmosphere from blowing away is …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I think that the temperature estimates are a bit naive. They seem to be obtained by assuming that the surface temperature is proportional to the amount of radiation it receives from the Sun, therefore leading to an inverse-square law with distance for the temperature, starting from the current position and temperature of the Earth. Clearly, that's not a good model, because it estimates that the average temperature would be around 76C when crossing Venus' orbit, while Venus itself is at more than 450C (on the sunny side).

For one, the increase in solar radiation implies an increase in ionizing solar radiation, which would significantly change the structure of the atmosphere (pushing the ionosphere much closer to the surface) and result in a lot more UV radiation at the surface as well. Also, closer proximity to the Sun implies a much more intense solar wind, which could basically blow away much of the atmosphere and overpower the Earth's magnetic field.

Also, as the Earth picks up speed, it will undergo a lot of friction (aerodynamic drag against the surrounding solar wind and interplanetary medium). The Earth will basically turn into a comet, and most of its volatile surface material will be stripped and blown away; that includes the atmosphere, the water, most of the soft crust (e.g. sediments), and obviously, all of us.

It would be pretty hard to predict or anticipate the exact way that all this would play out or a specific time-line, but I would say that we …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

accessing non-constant global variables makes the function non-rentrant?

No. It is not a matter of whether the global variables are marked as "const" or not (in fact, some languages don't have that concept at all). It is a matter of what the function does with the global variable: if it only reads it ("read-only access"), then it can be reentrant, but if it reads and writes to it, then it cannot be reentrant (under the stricter definitions).
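Here is a small sketch of that distinction (the names are made up for the example):

int g_scale = 10;    // global, but only ever read by the function below
int g_counter = 0;   // global, mutable state

// Reads a global but never writes it: the global effectively acts as a
// parameter, so this can still be considered reentrant.
int scaled(int x) {
  return x * g_scale;
}

// Reads and writes a global: interrupt it between the read and the write,
// call it again, and one increment is lost. Not reentrant.
int next_id() {
  int tmp = g_counter;
  g_counter = tmp + 1;
  return tmp;
}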

Think about it this way. If you interrupt a function at any arbitrary point, what is all the information (values of variables) you need to gather in order to be able to restart the function from the point where it was interrupted? Some data participate in the computations in the function but remain unchanged throughout its execution; these are called the "parameters" of the function. Some data are modified throughout the execution of the function as it progresses (e.g., loop counters, summation values, intermediate values of calculations, etc.); these are called the "states" of the function. The parameters characterize the function call and determine what you should expect as a result of calling it. The states characterize the progress-point of a function's execution. So, if you interrupted a function execution at some point and recorded all of its states, then you could restart / resume that function execution from the same point later on by simply reloading the state. A reentrant function is one which only has local …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

This is funny because there was just a guy presenting a fuzzing tool for testing Clang (the C / C++ / Obj-C compiler), check it out here. This is really cool stuff, especially the afl-fuzz tool.

I want to avoid to test some program which is widely used or critical for the system because there is almost certain it is regularly fuzzed, so finding a bug would be uncertain and I would spend lot of time creating a fuzzer with no result.

I don't think that fuzzing is done as routinely as you think or would hope it is. And the idea that "widely used or critical" software has no bugs left to be discovered this way is also a somewhat optimistic / naive view of things. Complex systems are complex; there are few pieces of complex software that are not full of bugs.

Surely, if you consider security-critical software, then there is probably a high chance that it is being fuzzed regularly. But lots of other tools are not security-critical and therefore just deal with bug reports from users (and use only the more basic self-tests, like unit-tests).

Problem is that I am not sure what exact program to fuzz.

Anything that is based on doing complex parsing tasks is certainly going to trigger a huge load of fuzz failure reports. I wouldn't be surprised if even some of the more well-established tools like "grep", "sed", "awk", and so on have a …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

if it is not gviging correct answer, then how it is still thread-safe?

That's related to what I pointed out about things being relative to how you define your requirements. In other words, you can adopt a "low-level" view of thread-safety and only consider that individual accesses to shared state are protected such that they are not corrupt. Or, you can adopt a more "high-level" view and specify an overall behavior that should be observed even in multi-threaded environments. The point is that this is all very subtle and multi-layered, and you always have to be careful about what the "thread-safe" label means for the specific context in which the term is being thrown around.

reentrant are functions when they're not using global variables and all. but in all your example, you make functions which are using global variables things.

I said that reentrant functions don't have a global state, which does not forbid the use of global variables. It only forbids the use of global mutable data (state) that is changed by the function. So, read-only access to a global (const-)variable pretty much makes the global variable a parameter of the function, not a state. That's an important distinction in practice.

rubberman commented: As usual, Mike2K gives good advice. +12
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Rev Jim said:

As I type, the edit window automatically grows, however, when it grows large enough for a scroll bar to appear, the control does not auto-scroll to keep what I am typing visible.

Sounds like Problem 3 was not solved after all. I haven't had that problem since the fix for problem 2, but I guess it's back. Is the behavior you're reporting recent, Jim? Because for me, this issue disappeared with the fix to problem 2 a few days ago.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

As rubberman says, the possibility of repeated base-classes (which can be solved with virtual inheritance) is one problem. Another problem is with other ambiguities between inherited classes, like data members or member functions with the same name in the different classes (even if those members are private, the ambiguity will still be there).

Overall, the problem is that once you allow the possibility of any class being used in a multiple inheritance scheme, you have to worry about all sorts of bad interactions like the ones just mentioned. This breaks encapsulation, in the sense of having self-contained classes, and can quickly become difficult to deal with.

Also, once you allow multiple inheritance, you also have to understand that up and down casts between base and derived classes could potentially lead to (virtual) offsets being applied to the pointers (or references), which will make the use of C++ casts (as opposed to C-style casts) very important.
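Here is a tiny sketch of why those offsets matter (illustrative types only); a cast that knows both types involved adjusts the pointer, while a cast that bypasses the type information does not:

#include <iostream>

struct A { int a = 0; };
struct B { int b = 0; };
struct C : A, B { int c = 0; };

int main() {
  C obj;
  B* adjusted = static_cast<B*>(&obj);        // the compiler shifts the pointer to the B sub-object
  B* raw      = reinterpret_cast<B*>(&obj);   // no shift: this actually points at the A part
  std::cout << (adjusted == raw) << std::endl;  // typically prints 0: the two pointers differ
  return 0;
}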

In other words, using multiple inheritance in a wide-spread manner requires a lot more care and diligence throughout the code, everywhere. That's why it's often discouraged (and I would discourage it too).

That said, there are tricks and specific use-cases where multiple inheritance can be a very elegant and beneficial feature. As long as it remains an occasional trick that you resort to for specific and limited purposes, it's perfectly fine.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

First, there can never be too many comments on interfaces, that is, function declarations, class declarations, modules, or whatever else your language has as interfaces between the users and the library code. The only limiting factor in documenting interfaces should be the time you have and the effort you are willing to put into writing it. And, of course, with limited time and effort, you have to prioritize by documenting more thoroughly the functions whose behavior is more complicated to specify (there's no point wasting too much time writing lots of documentation for a simple getter or setter function). And, of course, this should be done using the documentation-generation tags that are relevant to your language (like doxygen for C/C++, javadoc for Java, epydoc / sphinx for Python, and so on).
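For instance, a doxygen-style interface comment in C++ might look like this (a trivial made-up function, just to show the tags):

/**
 * \brief Computes the area of a rectangle.
 * \param width  Width of the rectangle, in meters (assumed non-negative).
 * \param height Height of the rectangle, in meters (assumed non-negative).
 * \return The area, in square meters.
 */
double rectangle_area(double width, double height) {
  return width * height;
}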

However, within the actual code, it's a whole different ball-game. In this case, it's a bit more of a matter of personal preference, but there is also a pretty wide consensus that too many comments impede the readability of the code. Always remember that programmers are much more used to reading code than reading English.

There are a number of ways to ensure that code is self-explaining and easily readable. Using good names for variables and functions goes a long way to make the logic self-evident (as a counter-example, in a code-base I work with now, they picked vague and senseless names like "Driver", "Action", "Task", "Invocation", "Instance", and "FrontEnd" for a set of related classes where none has a well-defined …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I speak quite a few languages too:

1) Quebecois (Canadian French, grew up speaking it)

2) English (the main language I use today; I did all my studies and relevant work in English, so I'm fluent without much of a detectable accent (people who have only heard me speak English assumed I was either (English-)Canadian or American))

3) Swedish (my father's native language and the first language I learned, due to an early childhood in Sweden)

4) French (standard "France" French, basically the same as Quebecois but with a very different accent (or more formal), and different word and expression choices)

5) German (learned it by living in Germany for about a year, and I can get around in German and watch German movies without subtitles and things like that, but the conversations I can hold are limited)

6) Spanish (learned it in school, and I can basically get around in Spanish)

And a few more "bonus":

7) Finnish (lived in Finland for a year, learned it at university there (mandatory for foreign students), but I have very limited capabilities because it's a really hard language to learn)

8) Danish (if you stick a potato in my mouth and ask me to speak Swedish, then I'll basically be speaking Danish... and when I've been drunk enough, I've been able to converse with drunk Danish folks, with limited coherence ;) )

Farsi, Pashtu, Urdu

These languages must be pretty close because I've met several people who speak all three …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Problem 3 seems to have been solved too! At least, as far as I can see from testing it just now. I'll let you know if it resurfaces.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

thread-safe but not reentrant

Here is a simple example (in C++, cause I'm not fluent enough in C for this):

#include <atomic>

std::atomic<int> value{42};

void foo() {
  value.store( value.load() + value.load() );
}

In this case, because foo does all of its accesses to shared data (the "value" global variable), read and write, via the atomic operations load and store, it is perfectly thread-safe in the sense that no load or store can occur while some other thread is in the middle of loading or storing it too. However, if you interrupt this function after the first load operation and before the second load operation, then the possibility that the global value undergoes some sort of change in the meantime (by, for example, calling foo again) makes the function non-reentrant, because its outcome would be different from (and inconsistent with) having executed it entirely the first time around.

But it's all a matter of expected behavior. Generally speaking, the expected behavior of the foo function here is that after foo returns, "value" ends up having twice the value that it had when foo was entered. And in that sense, it is clearly not reentrant. One could also specify the thread-safety in terms of that expected behavior and thus say that this function is not thread-safe, because it cannot guarantee this behavior. But a more basic thread-safety definition would say that any imaginable sequence of multi-threaded executions of this code (with repeated and concurrent calls to foo) has a …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

For me, this happens in any editor (reply or new thread), and it consistently happens for any text that spans multiple lines (wraps). If a line ends with a space and you try to move the cursor to the position just before that space (after the last non-space character), it jumps to the start of that line instead. If you need an example post, just use this one, because I'm having the issue right now as I'm writing this post.

Version of browser:

Chromium    39.0.2171.65 (Developer Build) Ubuntu 14.04
Revision    b853bfefba0da840f4574eb3b5c7ad6e9b8573b5
OS  Linux 
Blink   537.36 (@185325)
JavaScript  V8 3.29.88.17
Flash   13.0.0.206
User Agent  Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Ubuntu Chromium/39.0.2171.65 Chrome/39.0.2171.65 Safari/537.36
cereal commented: same for me +0
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I use KDevelop. Great code-completion, great plugins and build system support, decent debugging capabilities, and much more.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Well, as rubberman said, reentrancy is a more theoretical notion about formal analysis, but I would also add that in terms of multi-threading, reentrancy provides a very strong guarantee that a function will work as expected when invoked concurrently (simultaneous threads executing the same function).

You have to understand that concurrent programming (or "multi-threading") is all about the data that is shared between the concurrent threads, that is, the (mutable) data that multiple threads need access to more or less at the same time (or in some specific sequence). We call that "shared state".

By definition, a reentrant function does not have any state (mutable data) outside of itself (i.e., it can have local variables, but it doesn't access any mutable variables of wider scope, such as global variables), it's basically the first (and main) rule for a function to be reentrant. Since it does not have any state, it certainly does not have any shared state. And this is what makes it nearly fool-proof as far as using it concurrently.

But note that there is also a weaker definition of reentrancy that only requires that the function not change the value of any global data by the time it returns (e.g., it could momentarily change some global data and then restore it before exiting). But this is a weak definition that is only meaningfully "reentrant" in a single-threaded environment (if at all), and is therefore not what people usually mean when discussing reentrancy in the context of multi-threading.

The …

rubberman commented: Excellent response Mike! +12
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Like rubberman says, SSH can be an annoying method. The problem with trying to rely on SSH connections via GUI applications (like a file explorer) is that SSH can sometimes issue warnings about keys that have changed or otherwise ask for additional steps. In some cases also, the GUI application uses a different user-id, which causes more issues in some setups (like with RSA keys or key-chains).

One alternative is to rely on a simpler (less secure) protocol like telnet or ftp, as rubberman suggests.

Another alternative, if you don't want to compromise on security, is to use something like SSHFS, which allows you to mount a remote file-system through SSH. In that case, you would just mount the remote file-system through the terminal, and then that folder (destination of mount) can be accessed by your GUI file explorer (or any other program) just as a normal folder. And because the mounting is done manually in the terminal, the issues with having a GUI deal with the SSH connection are eliminated.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

And generally speaking, allergies and other breathing discomforts are mostly caused by the larger particles and mold in the air. The pollutants (i.e., "chemicals" as some fools might call them) that are dissolved in the air might have some bad implications for your health in the long term (like carcinogens), but won't cause nearly as much allergy or asthma in the short term, meaning that it is likely the larger particles (within the cigarette smoke) that cause your bad reactions.

And as GrimJack points out, the pollutants that cigarette smoke puts into the air in any urban environment are dwarfed by all the other pollutants that city dwellers are exposed to (e.g., standing for an hour outside on a busy street corner is more harmful than being in a closed room with a chain smoker for a whole day).

I'm all droopy eyed right now ... trying so hard to get a good night's sleep feeling nauseous and also have a splitting headache all thanks to the nicotine infested air ... Grrrr....

I'm not a doctor, but I must point out two important and very possible alternative causes of your symptoms. First, mold problems are very common, especially in older and not-so-well-maintained apartment complexes. It is very possible that the cigarette smoke is simply making you more sensitive to the mold in the air, because it is far more common to have such symptoms from mold in the air than from cigarette smoke.

Second, you said things like "second-hand …

mattster commented: Are you sure you're not a doctor? ;p +0
sweetsmile commented: Thanks a lot for the elaborate reply . I think my symptoms are more physiological not pscychological +0
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Open source does not only apply to programming. If you consider "source" as meaning any kind of blueprints, models, design specifications, or typesetting (Word documents or LaTeX files), basically anything that is the basis for making something, whether it is a piece of software, a book, or a physical product of some kind, then the open source term can apply to it, if it meets the concept of being "open". Without getting into licensing politics, the essential elements of being "open" are that it is widely available and can be modified by anyone (and maybe contributed to by anyone).

In that sense, Wikipedia is an open source encyclopedia. Anyone can access the typesetting source of wikipedia pages and can contribute to it. Any wikipedia-style page qualifies in a similar way.

There are also physical products that are open-source. For example, Arduino is an open source board design. OpenSPARC is an open-source microprocessor (from Sun's SPARC family of microprocessors).

And if you stretch things a bit (but not that much), you could say that standards (like ISO or ANSI standards) are essentially open source projects, and in fact, many of them are prefixed with "Open" for exactly that reason, like "OpenGL". Standards are available to all, can be followed by anyone wishing to do so, and can generally be contributed to openly (through committee participation or proposals given to the committee). Most standards work this way.

aren't freely available books also part of open source.

It is …