mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I also use EasyBCD for my dual boots. From time to time, mostly after distribution upgrades (e.g., 13.10 -> 14.04), the link from the EasyBCD-configured bootloader to grub2 gets broken by an update of grub2. Every time this has happened, the fix was simply to boot into Windows and use EasyBCD (an up-to-date version of it) to remove the broken Linux entry and re-create it. I've never had problems with that.

If things are still broken after you have fresh entries from EasyBCD, then it might mean that your grub2 installation or configuration is corrupt. To fix that, you can basically follow these debian-family instructions here, with one important change: point the grub installation to the partition (not the hard drive) where you originally put grub2 (I assume it's on your Linux partition). In other words, you should do $ sudo grub-install --root-directory=/mnt/ /dev/sda1 (replacing "sda1" with whatever is the correct device identifier for your Linux partition). The instructions say to use "sda" or similar, but that installs grub on the MBR of the hard drive, which is not what you want.

After you've reinstalled grub2, you might have to go back to Windows again to recreate the Linux entry with EasyBCD.

Slavi commented: Thank you Mikey! +5
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I've always found the Pro Git book, freely available online here, to be very helpful and definitely good for beginners.

The thing with Git is that you'll eventually get an epiphany and it'll all become clear. It's mainly about understanding how diffs, commits and branches work together.

One thing though is that most tutorials on Git will at least assume that you are already familiar with some other version control system, like cvs, svn, mercurial, bazaar, etc.. So, that might be a bigger problem if you are not already familiar with any of those and lack the high-level conceptual understanding of how version control is used and of the general day-to-day workflows with it.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I can confirm the behavior that cereal described as "Problem 2". I get the same weird jumpy cursor issue. I'm also on Chrome (Chromium, actually) in Kubuntu.

Problem 3

Another issue that I have noticed is that, as you all know, I tend to write long posts. The editor box starts small and expands with every additional line until some point when a scroll-bar appears on the side. At that point, the editor does not automatically scroll down as I write, meaning that every new line that I write ends up below the bottom edge of the editor box, until I manually scroll down some more to see what I'm writing.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

The term API is a bit of a fuzzy term today. Back in the early days of computing, you had an operating system and you had applications. The company making the operating system would ship it along with a document that specified all the functions (in C) that the operating system provided and that applications could be programmed to call to perform certain tasks (e.g., opening files, opening network connections, etc..). This set of functions became known as the "Application Programming Interface" (API) because it was all the things that application programmers needed (or were allowed) to know about the operating system to be able to interact with it.

Today, the number of layers of software has greatly increased and diversified, and any typical system has a whole ecosystem of libraries, languages, interpreters, utility programs, and so on. So, the term API has taken on a much more general meaning, which should really be just "interface", dropping the AP part, which was only meaningful in the simple OS vs. Apps paradigm.

So, the broad definition of what an API is is still pretty much the same as before. If you write code that you intend to package up as a library to be used by other libraries or applications, then you will naturally have to determine how the users of your library are supposed to be using it. This includes figuring out what they should know and be able to do, and what should be hidden from them (a general concept …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

The easiest is probably to use chown to change the owner of the directory after you've extracted it. You can also use chmod to remove some permissions for the group and others.

Let's say you have the file my_archive.tar.gz which contains a top-level directory called my_folder, and the user you want to make the owner of the folder is called "myself". Then, you can do this:

$ sudo su
# tar -xzf my_archive.tar.gz
# chown -R myself my_folder
# chmod -R go-w my_folder
# exit
$

This goes into superuser mode, extracts the archive into the current directory (use cd to navigate to where you want to extract it first), changes the ownership recursively (-R) to the user myself, and changes the access permissions recursively such that write access is removed from the "group" and "others" while the read and execute permissions are preserved (and all permissions are preserved for the owner). Finally, it exits the superuser mode (you don't want to stay in that mode any longer than you need to).

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Libraries on Linux are usually located in /usr/lib and /usr/local/lib, or 32/64-bit variants of those. Usually, you should be able to simply pass the option -lSDL to the compiler, and it will find the library for you (at least, on my system, this works fine). It works if I take your code and run this:

$ g++ simple_sdl_test.cpp -lSDL -o simple_sdl_test

If not, you can also use a simple locate command to find what you are looking for, like $ locate libSDL, which should print out all files on your system that contain this name, for example, I get this on my system:

$ locate libSDL
/usr/lib/x86_64-linux-gnu/libSDL-1.2.so.0
/usr/lib/x86_64-linux-gnu/libSDL-1.2.so.0.11.4
/usr/lib/x86_64-linux-gnu/libSDL2-2.0.so.0
/usr/lib/x86_64-linux-gnu/libSDL2-2.0.so.0.2.0
/usr/lib/x86_64-linux-gnu/libSDL_image-1.2.so.0
/usr/lib/x86_64-linux-gnu/libSDL_image-1.2.so.0.8.4
$ 

But normally, with Linux projects, you would set up a cmake script that automatically finds the libraries and header files you need, by recursively searching the most likely folders for the best matches (version numbers can be specified too). Something like this in a file called "CMakeLists.txt":

# Announce your minimum required cmake version:
cmake_minimum_required(VERSION 2.8)
# Name your project:
project(SimpleSDLTest)

# Ask cmake to locate the SDL library:
find_package(SDL REQUIRED)

# Add SDL's header file directory to the includes:
include_directories(${SDL_INCLUDE_DIR})

# Create an executable target from your source(s):
add_executable(simple_sdl_test simple_sdl_test.cpp)

# Tell cmake to link your target with the SDL libs:
target_link_libraries(simple_sdl_test ${SDL_LIBRARY})

Where the simple_sdl_test.cpp would be your source file, as so:

#include <SDL.h>
#include <cstdio>
#include <cstdlib>

SDL_Surface* g_pMainSurface = NULL;
SDL_Event g_Event;

int …
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Both programs actually compile fine. The error you got for the first program is a linker error, meaning that it compiled fine but it couldn't link. The error in question is caused by forgetting to link with the SDL library. It is not sufficient to include the header file, you also have to link to the library. Just look at the FAQ for SDL.

If you are confused with the compilation and linking process, I recommend reading my tutorial on that subject.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Like others, I'm not a big poetry buff. I guess I get my dose of poetry from well-written song lyrics.

English is not my first language, so that plays into it too.

I do remember, however, that during the teenage period when I was a hardcore fan of The Doors, I got a copy of Jim Morrison's poem collection book and found it pretty wonderful, and much better than the lyrics of his songs, which were already pretty good to begin with.

Other than that, I haven't really delved too much into poetry beyond what I was forced to in English class, which was pretty rudimentary, given that they were "second language" courses.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Well deserved! You're really a shining star in your profession. We need more knowledgeable and down-to-earth reporting like yours.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

COBOL!! At least, that's the legacy language for banking, accounting and POS (point-of-sale) software. Hopefully, though, this is phasing out, because COBOL is a language universally known for being horrible.

I don't know much about banking software (e.g., software that bank managers might use or the back-ends that do transactions and all that). But these types of applications are not particularly special: just basic GUI applications, with lots of panels and parameters to set. The more crucial part is the back-end, which is normally handled with database servers and secure protocols to communicate between them. I'm not sure what language is the most popular for this, but I imagine that C++ is a large part of it.

Financial companies are a completely different beast. Their software needs are more along the lines of big data analytics... basically trying to predict stock prices or evaluate risks (when they are not off-loading it on the government by claiming to be "too big to jail"). For those tasks, they mostly hire mathematicians and engineers for their mathematical and problem solving skills, which is more important than programming skills in this domain. From what I've seen, they seem to mostly use C++, and sometimes Java, in this field.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

It all depends on how you read the files. In general, with buffered reading, like with ifstream, the stream will read blocks of data ahead of what you are currently reading. The reason for this is to minimize time spent waiting, because it's more efficient to read substantial blocks of data from the files than it is to read them byte-by-byte (unbuffered). Classes like ifstream use heuristics to determine how much data should be read into the buffer. "Heuristics" just means simple rules that perform well in practice.

I have never tested the buffering behavior of ifstream, but I would not be surprised if it reads 1MB or more at a time from the files.
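
If you want to experiment with this, you can supply your own buffer through the stream's rdbuf. Here is a minimal sketch (the 1MB size is an arbitrary pick, and the standard leaves the exact effect of pubsetbuf implementation-defined):

#include <fstream>
#include <vector>

int main()
{
   std::vector<char> buf(1024 * 1024);  // a 1MB buffer (arbitrary choice)
   std::ifstream file;
   // pubsetbuf must be called before open() to have any effect,
   // and even then, the effect is implementation-defined:
   file.rdbuf()->pubsetbuf(buf.data(), buf.size());
   file.open("data.bin", std::ios::binary);
   // ... read from 'file' as usual ...
   return 0;
}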

For example, one way to come up with a heuristic for buffering is to look at the input and output latency (time to wait to get data). Waiting for data from the hard-drive can easily take millions of clock cycles on typical systems. Getting RAM data that is already cached (in cache memory) can take maybe around 50 clock cycles (but it depends on the level of cache and many other things). So, you have a ratio of about 1 million to 1 in the latency between where you get the data from (HDD) and where you are delivering it to (CPU cache). This means that the consumer (CPU) could read out 1 million bytes in the time it takes for the producer (HDD) to produce one chunk of data. So, if you make those …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

AFAIK, escape sequences are identical in Java and C++ string literals. They both inherited them from C. The way escape sequences work is that there are a number of special sequences, like \n, \r, etc... for things like new lines and carriage return characters, but for everything else, if you have the backslash character followed by anything, it escapes to being just the following character, like \\ -> \, \h -> h, \" -> ", etc..

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Backslashes mark escape sequences. When you read a JSON message, you must interpret certain characters as separators. For example, if you have "some phrase", it is seen as the string 'some phrase' (no quotes, because the quotes delimit the string, they are not part of it). What if you want to say "a double-quote " in a string is confusing"? This will confuse the JSON interpreter because it cannot know that the double-quote in the middle of the string is part of the string; it will think instead that it marks the end of it. To solve the issue, escape sequences are used to resolve the ambiguity. If you have \", then it means that it is a literal double-quote, not a delimiter for the end of a string. Similarly, if you have \\, it means that it is a literal backslash, not the start of an escape sequence. There are several other escape sequences like that.

The complication with things like JSON is that nearly everything needs to be escaped, because there are lots of meaningful delimiters and there is nesting of messages. Part of the art of generating and parsing JSON messages is knowing when to escape and how to interpret escaped and non-escaped special characters. AFAIK, XML formats are easier for that because they don't have nested messages.
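
To see the layering in a tiny example (a sketch with a made-up message, written in C++, where the compiler's own string escapes add yet another layer on top of JSON's):

#include <iostream>

int main()
{
   // The JSON text we want to produce is: {"msg":"say \"hi\""}
   // In a C++ string literal, every " and \ must itself be escaped,
   // so the source shows two layers of escaping at once:
   std::cout << "{\"msg\":\"say \\\"hi\\\"\"}" << std::endl;
   // This prints: {"msg":"say \"hi\""}
   return 0;
}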

We can take your message to see what I mean by complications. Starting with this:

{"id":"Nitin1","clientName":"Gourav_first_task","tabs":"[{\"title\":\"Daniweb\",\"icon\":\"icons\",\"urlHistory\":\"[\\\"Dani.com\\\"]\",\"lastUsed\":1234}]"}

we see that the "tabs" value is a …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Because I've offended someone who is still in a deep-seated denial of the fact that their opinions are necessarily wrong whenever they are opposite to mine, because I'm always right. ;)

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

The vast majority of infrastructure code is in C++, or a C/C++ mix. The back-bone of most high-level language support structures (libraries, compilers, interpreters, virtual machines, etc.) is written in C/C++. 3D computer games are mostly in C++. Operating systems are mostly a mix of C and C++; that includes both the kernel itself and the low-level tools surrounding it. Most of robotics programming is in C++. Lots of server-side stuff, like web-servers and database engines, is written in C++. A very large amount of end-user applications (especially the more complex ones) are written in C++ too, I mean things like Photoshop or MS Office. Also, lots of big-data analytics software and other things like that are often in C++ too.

Basically, for anything that is complex enough, C is too crude and minimalistic. And for anything that is critical in terms of robustness and performance, high-level languages (Java, C#, Python, etc.) simply don't match up to C++. And the area that stands at the intersection of those two things is huge, and it's where C++ is used the most.

C++ may not come out on top in terms of total lines of code in the world (out-ranked by C and Java). In part, that is due to the fact that well-written C++ is a lot more terse (fewer lines) than other languages, especially C and Java. But mostly the reason is that C is still huge because of its longer history as the number 1 language, and the existence of lots and …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I think the best is offices for 2-4 people. It's private, yet social. That's my 2 cents.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

"Meta" in nature?

"Meta" means to go beyond and/or be self-referential.. Any thread on Daniweb that is about Daniweb is "meta", i.e., like a meta-forum is a forum about forums. Like a meta-study is a study of studies. Like a meta-program is a program that makes a program. This thread is "meta" in nature, because it discusses the workings of the Daniweb site, with a Daniweb thread.

Oh, you're not subtle at all, Mike.

I'm not subtle... for those who understand English. ;)

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I think it would be good to promote more intra-community mixing. I mean that there are lots of long-time and/or active members who are confined to only a few specific forums and don't tend to explore much. They either get the false impression that Daniweb is like the other stricter forums that bash on people who contribute to discussion subjects that aren't their forte, or they think that they are not interested in other discussions (e.g., discussions of a more social or "meta" nature), when they might be if they knew more about them.

I'm not sure how to realize such a feature. But it's definitely an aspect on which Daniweb could be improved.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I think that a reasonable definition of "most powerful" is: omnipotent, omniscient, and omnipresent.

What programming language is effective in all contexts, from small embedded systems to large super-computers? What language is effective at writing both small utility programs (e.g., command-line tools) and complex applications (e.g., compilers, computer games, servers)? What language can effectively express simple programming tasks without too much clutter or overhead, but also complex infrastructure with many layers of abstraction? What language is present everywhere, from core operating system functionality, to forming the foundation of most end-user applications out there?

The answer is obvious: C++

D could be a contender since it was specifically designed to follow most of the principles of C++, and is mostly C++ with a cleaner syntax (especially for compile-time mechanisms). But it has a long way to go in terms of adoption and field testing, because C++ is nearly 30 years ahead on that front.

That's another important thing that people don't consider enough when comparing the "qualities" of different languages. Having several years or decades of collective experience with a language is a huge benefit, and it's something that adds a lot to the "power" of a language: good established practices, a large community of experienced developers, several trial-and-error cycles lived through, a large body of library code, a wealth of development tools, etc... In other words, you cannot just evaluate its "power" by the language rules or design.

Tcll commented: I think you forget C is closer to the level of ASM, and C++ is just as pretty as it is ugly, good post though ;) +3
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Can I ask what char *strerror(num) int num; means?

I had never seen this before either. Since the OP did not report an error besides the sys_nerr-related stuff, I assumed that this line was some sort of very archaic form of C. This was a good guess, since this code was clearly written when "strerror" was something new and not widely supported, which means that it's from the 80s or early 90s at best; in other words, this code is probably written in "K&R C", i.e., pre-standard C (1978-1989). As it so happens, I dug around a bit, and it appears that this is indeed the old-school K&R C syntax for declaring functions. Even though this is no longer standard (AFAIK), compilers still support it for legacy reasons in C, but not in C++ (at least, that's how GCC behaves in my tests: it accepts that code in C but not in C++).

I guess this is what one could call code archeology: digging up some ancient code and re-discovering the bizarre practices of our ancestors.

NathanOliver commented: Thanks +13
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

The use of sys_nerr and sys_errlist is deprecated, in favor of the strerror function. The code you posted (which is not complete) appears to attempt to provide a version of the strerror function, relying on sys_nerr and sys_errlist, for systems that do not provide the more modern strerror function. This fall-back version of strerror should be enabled only if your system does not provide it (which is what the HAVE_STRERROR macro is used to mark). So, you should first make sure that that macro is correctly set for your system, i.e., test whether you have the strerror function or not on your system.
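
To make the structure concrete, the fall-back mechanism being described looks roughly like this (just a sketch with illustrative names; the actual code you have will differ):

#include <cstring>

#ifdef HAVE_STRERROR

const char* get_error_string(int errnum)
{
   return std::strerror(errnum);  // the standard C89 / POSIX function
}

#else

// Fall-back for old systems: index into the deprecated global table.
extern int sys_nerr;
extern char* sys_errlist[];

const char* get_error_string(int errnum)
{
   if(errnum >= 0 && errnum < sys_nerr)
      return sys_errlist[errnum];
   return "Unknown error";
}

#endif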

If you don't, then you can update your version of libc (the standard C library), and probably the compiler too. This function has been a standard C function since C89, and a standard POSIX function since 1992. Your system should have it, or should be updatable to a version that has it.

Otherwise, you'll have to replace those lines of code with something else that either (1) achieves the same behavior without using sys_nerr and sys_errlist, or (2) returns a dummy null-string in all cases (not really a good thing, but it might be the only option). To replace sys_nerr and sys_errlist, you will have to look into Solaris documentation, because at this point, you are beyond any established standard (the C / C++ standard is the strerror function; one layer deeper, the POSIX standard is strerror or the older deprecated sys_nerr / sys_errlist; and one layer …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I guess you have to let time tell. By that I mean that you will naturally come to that realization. But in more practical terms, here's a good checklist:

  • People start recognizing your expertise: you are allowed to agree with them, it would be false modesty to do otherwise.
  • You more often encounter code that is "below you" than code that amazes you: you're closer to the "top" than the "bottom" in terms of expertise and code quality or techniques.
  • You can have conversations with people that you hold in high regard, and find yourself comfortable in those discussions: you don't feel out-matched by the "experts".

I consider myself an expert (C++) programmer, because I can tick each of these items. I've accumulated a lot of praise for my skills, knowledge and accomplishments in programming. I no longer really encounter code that baffles me or overwhelms me, i.e., I recognize most of the patterns used (most of which I have used at one point or another), I rarely see code that I couldn't have produced myself, and I regularly encounter code that I see as very novice or very bad. And there are few people that I still look up to, and I have had equal-footed conversations with some of them. That creates a sum total that makes it clear where I stand.

In other words, the only way to tell is to see what's out there and put yourself out there, and see where you …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

You should use the initialization list, which is something like this in general:

Foo::Foo() : member1(/* something */), member2() { };

And so, for a dozen eggs, you should use one of the constructors of vector class, such as this:

Tray::Tray() : dozen(12, Eggs()) { };

where "Eggs()" can be some other constructor call, I just used the default constructor there.

And for the destructor, you don't need anything, because the vector will be automatically destroyed.
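
Putting it all together, here is a minimal self-contained sketch (assuming your classes look roughly like this):

#include <vector>

class Eggs { /* ... */ };

class Tray
{
public:
   Tray() : dozen(12, Eggs()) { }  // fill the vector with 12 default-constructed Eggs
   // No destructor needed: 'dozen' destroys its contents automatically.
private:
   std::vector<Eggs> dozen;
};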

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

It's probably not a good idea to use port 123, because that's the standard port for the network time protocol (NTP). It is likely to be in conflict with it.

When you need to pick an arbitrary port number for something like this, you should always pick something much higher (never below 1024, because those are reserved for standard protocols, and many of the registered ports just above that range are commonly used too).

A typical port used for ssh, besides the standard port 22, is port 22022. In any case, try using a much bigger port number to steer clear of any standard protocol.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

what is it about the LLVM codebase that annoys you?

Here are a few things that I've encountered a lot in LLVM / Clang that irritate me (to varying degrees):

  • ; characters after } only where it's needed.
  • Intentionally using parameter names that conflict with data members (which are invoked with this->), apparently for "clarity" (I tend to prefer some artefact on parameters, usually an "a" prefix; it's the only place I use such Hungarian-style prefixes).
  • No indentation of switch cases (I've seen real bugs created by this confusing style).
  • No const-correctness whatsoever (that's probably the most serious one).
  • False globals (i.e., when you're more concerned about appearing not to be using global variables, you just end up hiding them in convoluted syntactic constructs).
  • Relying on code-generators for things that are trivially accomplished with the pre-processor.
  • 80 character limit, which I think is excessive and antiquated (I prefer 120).
  • CamelCase!
  • .. and no_camel_case...
  • if ( foo ) instead of if( foo ).
  • if ( Foo *foo = get_some_ptr() ) (clever... maybe, but unavoidably inconsistent).
  • Using Foo &foo, because everyone knows that a type modifier should be stuck to the object, not the type.. euh... whatever.
  • I've also spotted some non-capitalized MACROs.
  • Raw pointers everywhere instead of references.
  • Error-codes, no exceptions (because you cannot use references, if you forbid exceptions).
  • No RAII (I guess it's a good thing they don't use exceptions!).
  • Virtually inexistent documentation.
  • Make-up-as-you-go interfaces (tons and tons of half-complete interfaces).
  • .... I guess what sums up …
L7Sqr commented: Good points. I agree with more than not but I have to side with CamelCase and if ( foo ) (I like the separation) +9
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Well, that's not the Fibonacci sequence at all. I mean, the recursion return Cabin(n/2) + 1; is nothing like the Fibonacci recursion. It should be something like return Cabin(n-1) + Cabin(n-2);, but watch out for the termination condition (not to mention that this is not a very efficient recursion).
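
For reference, the classic (and inefficient) recursive Fibonacci looks like this, termination condition included:

int fib(int n)
{
   if(n <= 1)
      return n;  // termination: fib(0) == 0 and fib(1) == 1
   return fib(n - 1) + fib(n - 2);  // the actual Fibonacci recursion
}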

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

How many of those emails have you actually received matrixdevuk?

I'm asking because I'm guessing you haven't had that many, maybe only one. The receivers of these emails are selected based on many factors, including posting history in the relevant forum category and reputation metrics (rep, up-votes, post quality, etc..). Being a high-profile member, participating in some of the most active forum categories, I imagine I'm among those who get the most of these emails (PMs), and I think I've only received maybe 4 or 5 in total since this feature was put in place a few months back, if memory serves me right. That amounts to maybe 15-20 emails per year. I wouldn't call that spam.

Maybe this complaint amounts to a storm in a teacup? Just saying.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

This is really sad... He was such a nice guy and a staple of this community. He had a wealth of knowledge and always a great attitude with everyone. He will be missed very much. RIP Mel.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

It seems that your power supply is just a bit too small. You should be able to find a 300W or 350W power supply to replace the one in that computer. Such low-power power supplies are very cheap; you should be able to get one for under $50. If you haven't upgraded anything else, then you probably don't need much more than 300W.

Also, once you've replaced the power supply, you could open up the old one and see if anything is burnt. Often, power supplies have several parallel circuits to provide power, and if one of them breaks (burnt, overloaded, etc..), the power supply still works, but it can no longer provide as much power.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I see two potential sources for the problem.

First, it's possible that your BIOS does not support a hard-drive of that size. As far as I know, a classic BIOS / MBR setup of that generation of computer (pre-UEFI) has a limit of 2.19TB for the hard-drive from which you want to boot. So, that should be enough to allow you to use your 2TB HDD, but because you are near that limit, it's possible that this is where the problem comes from (maybe your BIOS is even more restrictive in size). But the good news is that this is only a limitation on the HDD from which you boot the OS, so you can always use a smaller HDD for the OS, and use the bigger HDD for additional storage.

Second, it's possible that your power supply is too weak to accommodate the peak power consumption of your HDD. Hard drives typically don't consume much power compared to other components (CPU, graphics card, etc.), but their peak power requirements can be quite high (because of the electric motor revving up). A typical 2TB hard-drive will consume between 5-10W most of the time, but during a start-up (or rev-up) it can momentarily consume 50-100W (a rule of thumb is to multiply the idle power by 10). I looked up the specs of the Dell OptiPlex 760, and it has a power supply between 250W and 300W (depending on the exact model), which is quite weak (but sufficient in that …

mouaadable commented: thank's +1
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Obviously, the first step is to multiply the number of cores by the frequency, e.g., 8 cores x 3 GHz = 24GHz and 6 cores x 4 GHz = 24GHz. That gives you a rough approximation of how the two CPUs compare to each other in terms of total number of instructions per second.

Then, you generally give a little bonus to the CPU with the higher number of cores, because it will have more overall L1 cache (since it's per-core cache), and because it suffers less congestion (too many programs / threads executing on the same core) than the one with fewer cores.

By those two very crude measures, the CPU_1 would be the better one.

Then, you have to check which generation of CPU it is. In recent years, most of the innovation in CPU technology has gone into various sophisticated optimizations in the technology, rather than just "more cores, more cache, higher freq.". Things like pre-fetching, branch prediction, concurrent instructions and out-of-order execution, and so on, are all things that have been improving a lot and affect performance a lot. Because the reality is that even if a CPU can execute a huge number of instructions per second, most programs spend most of their cpu-bound execution-time suspending the instruction pipeline for things like cache misses and flushing or rolling-back instructions after mispredicted branches. So, these kinds of innovations in CPU tech. are very important and are much harder to measure or assess. So, at least, you …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

It is undefined behavior because the compiler is not required to perform the increments in any specific order.
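
For example (a sketch of the kind of expression in question):

int i = 0;
int x = i++ + i++;  // undefined behavior: the two modifications of 'i'
                    // are unsequenced relative to each other.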

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Well, according to the scores on cpu benchmarks, we have this:

CPU                              -  Score   -  Price
Intel Xeon E5-2697 v2 @ 2.70GHz  -  17,516  -  $2,579.99
Intel Core i7-4960X @ 3.60GHz    -  14,022  -  $1,048.99

So, that should answer the question of which one is the best, at least as far as there could ever be a single number to represent that (which is a big assumption, since it can depend a lot on the typical use and the rest of the computer, especially RAM). But you should also keep in mind the price difference. Is a 25% increase in performance worth nearly 2.5 times the price?

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

If you want to be sure to catch the point at which the upper and lower bounds are equal, then you have to check that condition every time you modify one or the other, and you have to make sure you don't modify both at the same time.

This is an interesting case because it does not have any shared data (each thread modifies its own data) but it has a shared condition (for termination) which is essentially the same as having shared data. That's probably the lesson that this exercise is supposed to teach you.

Once you realize that your upper and lower bounds are shared (through the condition) between the two threads, then it becomes a simple synchronization problem that is easily solved with a mutex.

You can represent your threads with boost::thread and the mutex with boost::mutex (which is locked with boost::unique_lock<boost::mutex>). There are plenty of examples and tutorials on using those. You can also follow the standard library examples, because the standard C++11 threads are virtually identical to boost-thread.
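
Just to sketch the locking pattern (with illustrative names; this is not ready-made code for your problem):

#include <boost/thread.hpp>

boost::mutex bounds_mutex;
int lower_bound = 0;
int upper_bound = 100;

// Each thread moves only its own bound, but the termination check
// reads both bounds, so the update and the check go under the lock:
bool raise_lower()
{
   boost::unique_lock<boost::mutex> lock(bounds_mutex);
   ++lower_bound;
   return lower_bound < upper_bound;  // 'false' means: time to terminate
}

bool reduce_upper()
{
   boost::unique_lock<boost::mutex> lock(bounds_mutex);
   --upper_bound;
   return lower_bound < upper_bound;
}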

But if you are having trouble writing that code, just show us what you have so far and we can help you along. But we can't just give you ready-made code.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

How should you deal with copying and destroying FilmList objects?

The answer that this question is begging for is "deep copy". You can read more about it in my tutorial.

even I test it with a simpler replica of this design shown that the pointer in the original one is not deleted. Tested by printing out the current value of the pointer

When pointers are "deleted", what it really means is that the memory they point to is released, i.e., it is made available again for future objects or arrays being allocated dynamically. Deleting a pointer does not change its value (it still points to the same place) and it generally does not change the memory it points to either (i.e., if you print the content of the memory immediately after deleting it, it is probably the same). The only thing that changes is that later on, the same memory could be allocated to other things.

So, once you've deleted a pointer, you no longer have the right to access that memory. It is no longer yours.

So, if you simply copy the container of pointers, and delete all the pointers of the original one, then your copy will have a bunch of pointers to forbidden memory, and anything you do with them is illegal / ill-defined / undefined behavior / really really bad!!!! In other words, that copied container is complete garbage.
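
To make the "deep copy" answer concrete, it means something like this (a sketch with hypothetical names, since the full class wasn't shown):

#include <vector>

class Film { /* ... */ };

class FilmList
{
public:
   FilmList() { }
   FilmList(const FilmList& rhs)
   {
      films.reserve(rhs.films.size());
      for(Film* p : rhs.films)
         films.push_back(new Film(*p));  // copy the pointee, not the pointer
   }
   ~FilmList()
   {
      for(Film* p : films)
         delete p;  // each list owns and deletes its own copies
   }
   // (a real class would also need a matching assignment operator, omitted here)
private:
   std::vector<Film*> films;
};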

So what does the original question even mean?

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

If they're gonna go for the year that corresponds to the technology they put out, then I guess that 10, as in 2010, is about right... or maybe a little presumptuous.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

You need to also add boost-system (libboost_system.a or libboost_system.dylib). And make sure to add it after boost-thread in the list of libraries to link with. This is just because boost-thread depends on boost-system, so both need to be linked, in that order.
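
For example, on the command line with g++, the order would be (assuming your source file is called main.cpp):

$ g++ main.cpp -lboost_thread -lboost_system -o my_program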

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

It depends on what level you want to code at.

For graphic user interface (GUI), I recommend Qt. There are plenty of tutorials out there (and it's cross-platform too).

For 3D graphics, I would recommend Coin3D (an open-source implementation of OpenInventor, which is a high-level OOP API on top of OpenGL, with nice tutorials). You can also integrate it to Qt using SoQt. You could also use OpenGL directly, but that can be a bit complicated.

For a more complete 3D game, you can use Ogre3D.

As far as I know, these are all the most popular graphics libraries in Linux. And they're in C++, of course, that goes without saying.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Yeah, the test command should be:

$ env x='() { :;}; echo vulnerable' bash -c "echo this is a test"

If it prints "vulnerable", you should start patching now. It did print "vulnerable" when I first tried it, but I checked my updates and bash got updated to 4.3, and now the test command no longer shows it to be vulnerable. Yay!

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I totally understand your frustration. I've had my share of problems with Windows Update. I think that the update system works under the assumption that you had Windows pre-installed with your computer and that you always use Windows and therefore, never miss their monthly updates. But for people like me, who can count on their fingers of one hand the number of times they booted up into Windows in the past year, that assumption breaks down.

Every time I boot up into Windows (most often because someone forces me to use PowerPoint for a presentation, instead of my preferred option: LaTeX/Beamer), I have several months' worth of updates to get through (often requiring several reboots!!!), and every other time, they fail somehow and force me through all sorts of steps to repair them. They simply did not design the system to work unless you install all the updates as they are put out, meaning you need to always be using Windows to catch them all, and you can't really do a re-install from an old (not up-to-date) copy of Windows' installation CD/DVD/whatever.

And yeah, Windows' restore system is a joke! And as a customer, the joke's on you!

And I totally agree with JorgeM. If you need Windows for something, put it inside a virtual machine. It's preferable not to allow it unfettered access to your system; Windows doesn't deserve that privileged position!

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Those two functions essentially do the same thing (read a line from cin). The difference between the two is that the member function (cin.getline(..)) takes an argument (the string to be filled in with the content of the line) as a char*, which is the C-style representation for strings (a pointer to an array of characters, terminated by a null-character), or "C-style string" for short. The non-member function version (getline(cin,..)) takes an argument as a std::string object (by reference), which is the standard (modern) C++ representation for strings.
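
Side by side, the two forms look like this (a quick sketch):

#include <iostream>
#include <string>

int main()
{
   char c_line[256];
   std::cin.getline(c_line, 256);  // member function: fills a C-style char array

   std::string line;
   std::getline(std::cin, line);   // free function: fills a std::string
   return 0;
}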

The reason why the std::string version of getline is not a member function of the std::istream class (which is the class of cin) is for a historical reason. Basically, in the early days of C++, as people were designing the basic libraries that became the standard libraries, the IO stream libraries (to which std::istream belongs) were created before the string library was created (or made "standard"). Therefore, the getline function was a member of cin and relied on the more basic C-style string representation. When they created the C++ string library, they provided, as part of the string library (in the <string> header), a getline function to do the same as the other one, but with a std::string object instead of a char pointer.

Things were never changed after that, because there is no particular reason to change what's not broken. And it's not like it's easy to change things in the standard library of one of the most widely used language …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Elementary os = Linux version of Mac :)

What?

ElementaryOS is a distribution of Linux, based on Ubuntu and GNOME, which pretty much copied all the GUI elements of Mac OS X. See for yourself.

He went with text starting as "Wow, Linux looks like Mac, crazy!",

I find that most "layman" people's reaction to Linux is some sort of ambiguous look, followed by "Are you running Mac OS on a PC?". This is mainly because for most people anything that isn't Windows looks like Mac.

I went to see his Macintosh, and it did indeed look awfully like Unity and GNOME.

Yeah, that is true. Unity really incorporated a lot of the typical Mac elements. It has a top menu bar that changes depending on the application window that is in focus, which is something that Mac OS introduced. It has a side bar with large application icons only (instead of the bottom bar with small icons and text, as in traditional Windows), which is also inspired by Mac (although many Linux distributions had this already for a long time). The software center of Ubuntu probably pre-dates any kind of AppStore from Apple, i.e., this is an element that was taken from the Linux world into the Mac world.

I think that the main thing to remember is that Apple is known for being innovative when it comes to such things, and for being quite successful at coming up …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

You certainly deserve a lot of praise for this piece of code! We rarely see beginners producing code at that level of "good practices" of programming.

Is my code well-written and readable?

Yes, it's well-written and readable. You get an A+ here. I really can't find anything to critique in that aspect of your code. It's well spaced out, consistently styled, the names are well chosen and meaningful, and you have a healthy amount of comments in your code (without being excessive).

In split.h, should splitBySpace and splitBySep be part of two different classes, or is the fact that they both have similar functions enough to keep them in the same class?

They are not part of the same class, they are part of the same header / source. This is totally fine. Headers and source files are meant to regroup things that are very much related or very similar. The most important thing is that logical separation. After that, you can be concerned about size (not making them too large), but that's not a problem for you now ("large" starts at maybe a thousand lines of code). So, the "one class per header" rule is only a rule of thumb to help you achieve the other two objectives (logically related, and not too big).

For the most part I have been using include directives in the header files. If the header file has the include directive, then I have been opting to omit …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Those queries would go in the Mobile Development forum.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Why will I need to update each and every source file?

If you only change the definitions (implementations) of the functions, but leave all the declarations (or the function names and signatures) intact, then you won't have to change anything in the rest of the source code. However, and this is a big "however", you will have to recompile every single source file you have. This is really the core of the issue and the main reason for the "separate compilation model" (which is the technical name for this header / source paradigm with each source file being compiled separately and then linked together).
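
To illustrate with a trivial made-up example:

// foo.h -- declaration only:
#ifndef FOO_H
#define FOO_H
int triple(int x);  // users of this header see only the signature
#endif

// foo.cpp -- definition, compiled once:
#include "foo.h"
int triple(int x) { return 3 * x; }

If you change only the body of triple in foo.cpp, the source files that include foo.h do not need to be recompiled, only relinked.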

If all your implementation code is in the header files, you create several serious scalability problems for projects using that code. Here are a few points:

1) If headers only contain declarations, then they only need to include other headers in order to create well-defined declarations (function signatures). But if headers also contain the definitions, then they also need to include every header needed for those definitions, which is usually significantly more than what is needed for the declarations alone.

2) If all headers contain all their definitions, then compiling a single source file means that all the code of all the functions must be compiled for that translation unit. So, if any two source files or programs include the same headers, the code in those headers will be compiled twice, which is just a waste of time. If definitions are in source files, …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

You always know the characters are going to be okay, because there has got to be another episode with that character in.

Clearly, you haven't been watching Game of Thrones. ;)

Have you noticed this pattern in every episode of any tv show

Of course, that's just the classic five-act dramatic structure. It's an extremely common and dominant pattern for story-telling (in any medium). So, don't be surprised to see that particular structure, because it's literally everywhere.

The other main dramatic structure is the "epic", which covers things like Lord of the Rings and A Song of Ice and Fire. The way to recognize this structure is by the "cliff hangers", with the very obvious purpose of making you want to know what's going to happen next to your favorite hero(es) or how they will get out of their predicaments. People generally like epics, but they are very difficult to pull off, which is why many of them gain instant popularity but can easily fall flat (e.g., Lost, Heroes, True Blood, etc.). That's because the long-term arcs of the story and of the characters are very important and hard to craft well, and if they're missing, the whole thing just turns into melodrama (i.e., "soap opera") where the overall story and characters are essentially static, with lots of rises and falls but no net or meaningful movement.

That's one of the reasons I like anthology shows.

Oh yeah, I love those, …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Hitting CTRL-S after every sentence I type, even in places like this forum editor where it's obviously not needed (in fact, I had to ask Dani to add code to the editor to block the CTRL-S action from triggering the "save webpage" behavior of the browser).

I also refresh stuff every 5min.

I'm not a gamer, so my fingers don't linger on the WASD keys, but as a coder with a French-Canadian keyboard, my right hand's natural resting position is with the thumb on the ALT key, my pinky on the shift key, and the ring finger where the characters {}<>[] are (because on French-Canadian keyboards, I need ALT or shift to make those characters, which you need all the time when coding, especially in C++). Whenever I set my hands on the keyboard, that's where my right hand falls, invariably.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

do you think its important to put virtual keyword in class B and C?

It's not necessary, but it is good practice to do so, just to be explicit about it. Sometimes the code for the base class and the code for the derived class are far apart (different files, different folders, different libraries, etc.), so it's good to have this reminder in the derived class too. In fact, C++11 introduced a new keyword called override to use for this purpose (and it also triggers an error if the base-class function is not virtual, which is a very nice bonus) (read more).

What is Polymorphism in C++?

Polymorphism is when a single entity in the code (like a variable or parameter) can take any (or many) forms that lead to different behaviors. Dynamic polymorphism is achieved essentially as uonsin showed, by having virtual functions in a base class, with different implementations of them in the different derived classes. By creating an object of a derived class, and then passing it off as a base-class object (by pointer or by reference), the code that uses that base-class object can be written once for that base-class, but in actuality, it behaves differently for each derived class that is used. This is one of the fundamental benefits of polymorphism: write code once for some abstract interface, and reuse it for every derived class that exists or will ever exist.
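
Here is a minimal sketch of that idea (with illustrative names):

#include <iostream>

class Shape
{
public:
   virtual void draw() const { std::cout << "generic shape" << std::endl; }
   virtual ~Shape() { }
};

class Circle : public Shape
{
public:
   virtual void draw() const { std::cout << "circle" << std::endl; }  // overrides Shape::draw
};

void render(const Shape& s)
{
   s.draw();  // written once against the base class...
}

int main()
{
   Circle c;
   render(c);  // ...but behaves as the derived class: prints "circle"
   return 0;
}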

There is also static polymorphism, which is …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Your code is full of non-standard C++. Also, there is a typo: "\t"<program should be "\t" << program. Correcting those basic problems, we get this:

// NOTE: standard C++ headers don't have the .h ending:
#include <iostream>
#include <fstream>

// NOTE: standard C++ components are in the 'std' namespace:
using namespace std;

// NOTE: the main function must have 'int' as return value:
int main()
{
   char name[30];
   char rollno[20];
   char program[20];
   ifstream myfile;                         // handle to file.
   // NOTE: the filename is a 'const' char array
   const char inputfilename[]="studentinfo";
   myfile.open(inputfilename);
   if(!myfile)
   {
      // NOTE: it's better style to leave spaces between operators:
      cout << "file " << inputfilename << " cant open" << endl;
      return 1; //<- NOTE: return 1 from main instead of 'exit(1)'
   }         
   // NOTE: check the stream state after each read, not eof() before it,
   //       otherwise the last line gets processed twice:
   while(myfile >> name >> rollno >> program)
   {
      cout << name << "\t" << rollno << "\t" << program << endl;
   }
   myfile.close();
   // NOTE: don't use 'system("pause")'
   return 0; //<- NOTE: return 0 from main means 'everything went OK'
}

The above works fine, but the use of char arrays in this way is not recommended, because operations like myfile >> name are not safe. For example, if the user enters a name that is longer than 29 characters, you will have a buffer overflow (writing past the end of the char array). There are ways to deal with char arrays safely, but they are just more trouble for no reason. You should use the …

Slavi commented: Great read as always .. +4
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Normally, if bad-alloc is thrown, it indicates that you are trying to allocate more memory than you possibly could, given the amount of memory available on your system (RAM + virtual memory). Make sure you are not trying to allocate such a crazy amount of memory. And if this is the reason for the problem, then any alternative to malloc is not going to help you, because you will still run out of memory, whichever way you try to allocate it (from RAM).

Another possible, and much worse, reason for bad-alloc to get thrown is if you have corrupted your heap. This can happen when you delete the same memory twice, or if you ask one heap to delete memory that was allocated by another (e.g., allocating in a DLL and freeing in the EXE). That is a much more serious problem that requires significant investigation or memory debugging to find.

As far as alternatives go, the main alternative to malloc on Windows is to use the HeapAlloc function which is like malloc but allows you to specify which heap to use, and you can create a new heap object with HeapCreate. But that will not really solve the root problem that is causing your bad-alloc exception (whichever is the reason for it). The implementation of malloc on Windows probably uses HeapAlloc under-the-hood anyways (or GlobalAlloc or LocalAlloc, which are just shorthands for HeapAlloc with the default heap object, the one you get from the GetProcessHeap …

マーズ maazu commented: helpful, will remember those links +0