mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

What does the warning integer overflow in expression mean?

It means that the operation goes outside of the valid range of the particular type of integer. For example, if you have an integer of type unsigned char, then the range is 0..255 which means that doing 2 - 4 is an "overflow" because it goes negative, and, of course, 253 + 5 is also an "overflow" because it goes beyond the max of 255.
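
As a minimal illustration of that wrap-around (note that for unsigned types the wrap-around is well-defined, modulo 256 here, whereas overflowing a signed type is undefined behavior):

#include <iostream>

int main() {
    unsigned char a = 2;
    a = a - 4;          // "overflows" below 0: wraps around to 254 (i.e., -2 modulo 256)
    unsigned char b = 253;
    b = b + 5;          // "overflows" above 255: wraps around to 2 (i.e., 258 modulo 256)
    std::cout << int(a) << " " << int(b) << std::endl;   // prints: 254 2
    return 0;
}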

Why am I getting it?

For one, the &type - 1 operation could overflow by making the pointer "negative". Pointers are, in general, just unsigned integers that are interpreted as address values, and therefore, producing a negative pointer is an overflow error.

Another issue is that if you are on a 64-bit platform, then pointers are 8 bytes long, while integers (int) are typically 4 bytes long. I think that by default, in C, the operations are done in the default integer type int unless it is explicitly made otherwise. But I could be mistaken, I'm not super well versed in the soft typing rules of C. But when I generate the assembly listing for your program (after a few tricks to prevent optimizations), this is confirmed by the following assembly code:

leaq -8(%rbp), %rcx    // &x -> rcx (64bit reg.)
movq %rcx, %rsi        // rcx -> rsi (save it)
addq $-4, %rcx         // subtracts 4 from &x
subq %rcx, %rsi        // does (&x - (&x - 1))
movl %esi, %r8d …
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

You can use the tellg and seekg functions to re-establish the cursor (get pointer). As so:

std::streampos p = filestream.tellg();
stringstrm << filestream.rdbuf(); 
filestream.seekg(p);
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I agree with Hiroshe about UML diagrams. I have rarely found them to be useful. I think that they are simply too detailed. If you have to make a diagram that has all your classes, with all of their data members and methods (functions), you are basically doubling your workload (because it takes about as much time making that diagram as it takes to write the code), and you don't really get that "big picture" benefit because there are so many details in a UML diagram that anything that is non-trivial will end up as a huge UML mess of details and connections. I see UML diagrams more as a way to graphically document an existing design, not as a tool for designing it. But I sometimes draw higher level diagrams, with none of those unnecessary details, only to map out the building-blocks of the software and their inter-dependencies (very important!).

I also don't see UML diagrams being used that much as a form of communication about software design ideas either. All seasoned programmers know code very well, and therefore, it seems like a much more natural language of communication than UML diagrams. What I tend to see the most is people expressing their design ideas in the form of snippets of code (often just stubs) that illustrate how it should work ("use-cases") or how it could be done (e.g., like the skeleton of an implementation, maybe some pseudo-code in there). And I think that this seems to be a …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I got a Nexus 5 a few months ago. I love it, it works great and it's fast and crisp and all that.. no complaints. Also, the wireless charger is really nice to have.

I guess the SG S5 has higher specs but like Agilemind said, that's pretty much plateau'd at this point. And I don't know how it is where you live, but here, you can buy two Nexus 5's for the price of one unlocked SG S5. I just can't see how the slightly higher specs can justify that.

PS: sim-locking is just another way for operators to 'subsidize' phone prices by having their profit come from the monthly plans.

Also, don't forget those "deals" where if you enter a new contract you can get the phone almost free.. but of course, later on, when you want a new phone you can't get out of the contract and you now have to pay full price (or more) for the new phone.

I hate sim-locking. My first phone was sim-locked. Then I realized what that meant. I never bought a sim-locked phone again, and I support Google's move to undermine that whole scheme by putting out reasonably priced unlocked phones.

Part of the reason phones are so expensive to buy unlocked is because they want to steer people into these locked-in contracts instead, so they can squeeze more out of you. Basically, companies would look bad if they simply said that they don't sell unlocked phones, it …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Hi all,

I was just playing around with some ideas about optimizing a core piece of my library: the run-time type identification system (RTTI). What does that have to do with compile-time string concatenation? Well, the whole point of the RTTI is to provide information on the type of objects, and a string representation of that name is a pretty central part of that (as well as a hash value). Type names are made of identifiers, which are names that are known at compile-time, and could thus be compile-time string literals (that means things like "Hello world!"). The problem is, for complex types, like templates, many compile-time strings need to be assembled or concatenated to form the complete type names. At that point, I had to switch over to run-time code (e.g., using std::string or C-style functions) to form those compound strings. Why? Because there is no way to concatenate strings at compile-time, or so I thought...

What makes this difficult is that normally, when you concatenate strings, you just create a new chunk of memory big enough for the two original strings and then copy the two strings one after the other. Easy right? Well, there are two problems. First, you cannot "create a chunk of memory" at compile-time because there is no concept of memory at this point. Second, you cannot copy data because that would imply changing things, and you cannot change compile-time constants. So, clearly, the traditional method won't work.

Welcome to the world of pure …

ddanbe commented: ++ ! +15
Hiroshe commented: Excellent! +8
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I'd finally be able to use ctrl-c and ctrl-v

But I would expect that ctrl-C would be sending a kill signal to the command currently running. Most command-line interfaces that I know of use ctrl-C to interrupt (kill) the running program and ctrl-Z to suspend it. It would be weird if they changed that. Usually, the copy-paste can be done with ctrl-shift-C/V.

Here it is ;)

That's just the GNU coreutils library. This is a tiny part of the base MinGW MSYS environment. And the coreutils package is lacking some pretty critical tools, like bash, grep, find, awk/gawk, tar, gzip, etc... You should just go with MSYS instead to have a more complete environment. And as far as I know, when using coreutils in a cmd.exe session, you don't get the Bash features like redirections and pipes. Not having pipes makes the whole thing kind of pointless, no? At least, with MSYS, you get Bash, and thus, pipes, redirections, and the whole bash language for bash scripts... now we are starting to have something respectable as a command-line environment.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Are you talking about Cygwin? The Unix emulator for Windows, which has, like Linux, all the unix commands plus most of the GNU programs (and others) that can be installed via its package manager. Cygwin is one of the first programs I install on any Windows machine that I'm gonna be using.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

is adding m_y, m_m, m_d, considered as cheating?

Nothing is considered "cheating", but there are trade-offs. Keeping the year / month / day values within the class will add to the memory used by the objects (instead of just one number for the days since some fixed date in the past (called "epoch")), but you can, technically, respond faster to requests (calls) for the year / month / day (or some derivation of that). However, you also have to understand that this does not let you avoid having to write code for converting between the "day-since-epoch" value and the "year-month-day" values. Whichever way you do it, you still need to provide conversions back and forth. If you store only YMD, then you need to convert when getting or setting the date as day-since-epoch. And vice versa if you only store the day-since-epoch. And if you store both, then you have to guarantee that they are always referring to the same date, meaning that you will have to do conversions all the time, e.g., if you set the day since epoch, then you have to update YMD data to match it, and the same goes for every other function.

So, you will have to write the conversion code. As far as how to do it, well, you have to handle the leap-years and the various number of days in each month, but it's a fairly straightforward thing to do. If you have problems writing that …
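
If it helps, here is a minimal sketch of one compact, well-known way to do the year/month/day to day-since-epoch conversion (this is Howard Hinnant's civil-calendar algorithm, shown only as an illustration; the epoch here is 1970-01-01):

// Days since 1970-01-01 for a given (proleptic Gregorian) year/month/day.
// Handles leap years; m is 1..12, d is 1..31.
int days_from_civil(int y, unsigned m, unsigned d)
{
    y -= m <= 2;                                      // treat Jan/Feb as months 13/14 of the previous year
    const int era = (y >= 0 ? y : y - 399) / 400;     // 400-year cycles
    const unsigned yoe = static_cast<unsigned>(y - era * 400);            // year of era: [0, 399]
    const unsigned doy = (153 * (m + (m > 2 ? -3 : 9)) + 2) / 5 + d - 1;  // day of the "shifted" year
    const unsigned doe = yoe * 365 + yoe / 4 - yoe / 100 + doy;           // day of era: [0, 146096]
    return era * 146097 + static_cast<int>(doe) - 719468;                 // shift so that 1970-01-01 is day 0
}

The reverse conversion (day-since-epoch back to year/month/day) follows the same structure, just inverted.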

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

MACROs are a kind of find-and-replace feature with some additional useful benefits. First, they can take parameters which will be replaced inside the MACRO's body. But be warned, they should not be used as a substitute for functions, i.e., if you can do it with a function, use a function, because MACROs can be dangerous in the sense that they can introduce weird bugs. Function-like MACROs can be useful for generating code that could not be generated otherwise (via templates or functions). Additionally, MACROs can be used in simple pre-processor logic statements (#if, #ifdef, etc..) which can allow you to turn on and off certain sections of code depending on the value (or existence) of certain MACRO symbols.

Asserts are used for checking conditions that should never occur unless there is a bug in the code, i.e., they're sanity checks. The point is that there are a number of checks, such as range checks, that don't need to be done when you have completely bug-free code. But until you know for sure that you don't have bugs, you need to do some debugging and quality assurance (unit-tests, etc..). During that debugging phase, it's useful to perform a number of checks everywhere to catch as many bugs as possible, but once you are done debugging, you don't want to have to go through the code and remove all those checks. So, asserts are checks that are only enabled when the code is compiled in "debug" mode, and get removed by the compiler when …
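
For example, here is a minimal sketch of an assert used as a sanity check (the function is just a made-up example; compiling with NDEBUG defined, i.e., "release" mode, removes the check entirely):

#include <cassert>
#include <cstddef>
#include <vector>

// Sanity check: the index must be in range. With NDEBUG defined, assert() expands to nothing.
int element_at(const std::vector<int>& v, std::size_t i)
{
    assert(i < v.size() && "index out of range");
    return v[i];
}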

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I don't think that constexpr is used that much yet. And as far as standard libraries or Boost libraries go, they will only use constexpr if it is supported by the compiler they are being used with (which is typically detected through feature-test macros, such as the ones Boost uses).

Although it can probably be used in interesting ways, constexpr is mostly syntactic sugar (when you need a compile-time constant value, it avoids requiring C++03 template meta-programming) or about enabling some minor optimizations (pre-computing some expressions at compile-time). In other words, it is not really a necessary feature, and it can usually be worked around when needed. The point is, there probably aren't any important libraries for which constexpr is an absolute necessity, at least, not yet.
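
For example, here is a minimal sketch of the kind of compile-time computation that constexpr enables (in C++03, the same thing would require a template meta-program):

// A C++11 constexpr function: a single return statement, evaluatable at compile-time.
constexpr unsigned long long factorial(unsigned n)
{
    return (n <= 1) ? 1 : n * factorial(n - 1);
}

int buffer[factorial(5)];   // array sizes must be compile-time constants: 120 elements here
static_assert(factorial(5) == 120, "computed at compile time");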

should i use GCC instead of VS?

I would say yes, but that's my opinion (I'm pretty sure vijayan121 would disagree). I think MSVC is a terrible compiler, period. Any other option is better. Also, if you feel adventurous, you could try the brand new, fresh out of the oven, Windows version and/or MSVC-compatible version of Clang.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Is documentation systems something like Java libraries?

No. A documentation system, which is usually called a documentation generator, is a tool (program / application) that can be used to generate documentation about your code directly from your code (provided that you have some annotations in it). Notable examples are Javadoc (Java), Doxygen (C / C++ / Java / C# / Fortran / ..), EpyDoc / Sphinx (Python), and others. I would say that for most projects they are necessary but not sufficient. You just write code as usual, and you add comment blocks before functions and classes to describe what they are, what they do, and what parameters they take, all using the standard tags (like @param, @author, etc.). The documentation generator will look through all your source code, pick out those functions and classes along with their associated comment blocks, parse the tags, and generate a nice-looking, searchable documentation file (in various formats, like HTML, help files, pdf, etc.) with all of that. It's a nice way to have an up-to-date and complete API documentation of all your classes and functions, but in general, it is not sufficient because you need more of a "narrative" to make the documentation useful for outsiders (who want to use your code), such as tutorials, overviews, examples, etc.. But that generated API documentation is a must-have.
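
For example, a minimal sketch of what such an annotated comment block might look like (Doxygen / Javadoc style; the function itself is just a made-up example):

/**
 * @brief Computes the greatest common divisor of two integers.
 * @param a  First integer (must be non-negative).
 * @param b  Second integer (must be non-negative).
 * @return The greatest common divisor of a and b.
 */
int gcd(int a, int b)
{
    while (b != 0) { int t = a % b; a = b; b = t; }
    return a;
}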

Do you know how to use revision control?

A revision control, or version control system, is …

<M/> commented: How.... how do you write so much on every post (in a good way)! +0
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

In my lab, we have several computers that run Windows XP. They are all machines that are tied to a particular piece of hardware that they run, and we cannot upgrade the software that runs that specialized hardware, and therefore, we can't upgrade the OS. But my university put in a policy, as soon as XP was decommissioned, that no XP box would be allowed to connect to the internet any longer. So, all of our XP computers now run on closed networks, or I've set up firewalls to deny them outside access when connected to a network that could reach the internet. This is really the most that I can do, since upgrading is not an option.

I would imagine that many businesses are in a similar situation with some computers (maybe even most computers). So, maybe that statistic is a bit inflated by those companies that "still run machines on Windows XP" but keep them in closed networks. People have a tendency to assume that all PCs / computers are desktop computers that an employee is using to check his emails. Under my desk, I have 3 computers (two on XP, one on QNX, all about 10 years old) that only run hardware, no internet connection. In an engineering place I used to work at, they had several computers dedicated to simulation tasks, again, running old systems and on a closed network. Have you ever seen a modern manufacturing plant? There are desktop computers everywhere, mostly sitting in …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

From what the guys over at Clang and GCC have said, the constexpr feature is actually one of the hardest features of C++11 to implement. I think it's because it actually constitutes (another) compile-time Turing-complete language on top of C++.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Never heard of an FPAA, can you explain?

I had never heard of FPAA either, until I did a little looking up of some practical ways to implement analog computations. From what I gather, they can be used like FPGAs, but instead of programming digital logic gates, you program analog circuits. It seems like a nice way to make an array of analog computers in a small and efficient package. I can certainly imagine that there would be a lot of noise generated by all those transistors being packed on a single IC. But for an application like this (modeling neurons) the noise is probably inconsequential. If you were designing analog signal processing (which is generally aimed at removing noise or demodulating signals), FPAAs would not be suitable due to the noise level. But with neuron emulation, it is a kind of pseudo-digital system (due to the thresholded firings), meaning that the noise is not going to be a factor.

The key thing to remember with "the brain" is that it is a big approximation calculator. In other words, everything is approximate, signals are fuzzy, and results are probabilistic. So, a digital logic system (like FPGAs or CPUs), whose main characteristic is the ability to make exact calculations, is an odd fit in the midst of it all.

Can we assume that modeling the neuron more closely and more accurately will make any difference to our understanding of intelligence?

Does your understanding of transistors help you understand …

TrustyTony commented: Agree to change title to better describe content +12
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

By the way, if you want to install software updates through the command-line, you can do this:

$ sudo apt-get update
$ sudo apt-get upgrade

And if there are kernel updates, do this:

$ sudo apt-get dist-upgrade

Just wanted to point that out, in case you didn't know.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

The reason for the limited amount of "peeking" is because certain types of streams cannot accommodate more than a single character of peeking (and no seeking). But for the streams that can easily support moving back-and-forth (seeking) in the underlying data, there is no reason to use peek() instead of the tellg / seekg method. For instance, file-streams can accommodate seeking around the data, although it's preferable to read/write sequentially or seek as little as possible. In the case of string-streams, they support seeking around the data with ease, and there is essentially no significant cost to seeking back-and-forth anywhere in the data.

So, don't use "peek", use the tellg/seekg method, as so, for example:

std::string peek_next_word(std::istream& in) {
  std::string result;
  std::streampos p_orig = in.tellg();  // remember the current read position
  in >> result;
  in.clear();                          // clear any eof/fail bits, otherwise seekg would be ignored
  in.seekg(p_orig);                    // restore the read position
  return result;
}

Basically, any kind of implementation of a multi-character peeking function would have to use a mechanism similar to the above code (tellg / read / seekg) and that's why there is no reason to have a separate function (e.g., "peek_n(str,n);") in the standard streams to do this, because it's not gonna be more efficient than this tellg-seekg method.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

A deduced type traditionally means a template argument type. Like this:

template <typename T>
void foo(T&& t) { /* .. */ };

int i = 0;
foo(i);  // type T will be deduced from this call, from the type of 'i'

However, with the new standard also comes the keyword "auto" which is another form of deduced type, such as this:

int i = 0;

auto&& t = i;  // type 'auto' will be deduced from the type of 'i'

Universal references simply means that when you have a situation such as above (a deduced template argument or "auto") followed by double ampersands, the overall type of "t" will be whatever reference type is most appropriate, i.e., const or non-const, lvalue or rvalue reference. Which one you get depends on whether the variable initializing the reference (incl. one passed to the function template) is an lvalue (a named variable) or an rvalue (a temporary value, or a moved-from lvalue), and on whether it is const or not.
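
To make that concrete, here is a minimal sketch of the deductions you get (the comments show the resulting types under the reference-collapsing rules):

#include <utility>

template <typename T>
void foo(T&& t) { /* .. */ }

int main() {
    int i = 0;
    const int ci = 0;

    foo(i);             // T is deduced as int&,       so T&& collapses to int&        (non-const lvalue)
    foo(ci);            // T is deduced as const int&, so T&& collapses to const int&  (const lvalue)
    foo(42);            // T is deduced as int,        so T&& is int&&                 (rvalue, a temporary)
    foo(std::move(i));  // T is deduced as int,        so T&& is int&&                 (moved-from lvalue)
    return 0;
}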

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

For help viewers, you should use Qt Assistant help-files, which are the Qt equivalent of Windows help files / menus. The Qt Assistant framework has all the features you are describing, and the help files are easily generated from html pages. With that, it's just a matter of launching it upon the OnClick signal of a button (don't use "QDialogButton", that's for buttons that are inside a QDialog, such as OK/Cancel buttons). You can check out this tutorial about integrating Qt Assistant in a Qt application.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Learning occurs by adjusting the weight matrix in a neuron

Care to explain how? The logic required to update the weights is, generally speaking, exponentially higher in computational cost than the actual neuron firing logic / dynamics, which is quite trivial to compute, even on an ordinary PC. One of the big problems with ANNs (Artificial Neural Networks, which is what you are describing, and btw, you should not use the general term AI as being synonymous with ANNs, because ANNs are a very tiny subset of AI) is the fact that algorithms to learn the weights of an ANN are either (1) computationally intractable, (2) highly under-constrained, or (3) very crude and thus requiring a lifetime of learning (which is the way animal brains learn).

The forward-Euler trick can be used to approximate exponential decay, thereby reducing the computation required.

It seems to me like the whole thing could be done with an analog computer. This whole thing, in fact, looks like the dynamics of a fairly ordinary switched-capacitor circuit. I would imagine that with a few op-amps and transistors, this could be realized quite easily. If you need fast and instantaneous simulation of some dynamical system, analog computers are your friend; they've been used quite a bit in robotics to do some of the heavy calculations instantaneously. In fact, why not try this with an FPAA?

Also, if you are going to use forward-Euler, you might as well just stop thinking …
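
As an aside, here is a minimal sketch of the forward-Euler approximation of exponential decay that is being referred to (the time-constant and step size are made-up values, just for illustration):

#include <iostream>

int main() {
    const double tau = 0.02;    // decay time-constant, in seconds (made-up value)
    const double dt  = 0.001;   // integration time-step, in seconds (made-up value)
    double v = 1.0;             // the decaying state, e.g., a membrane potential

    // Forward-Euler integration of dv/dt = -v / tau, i.e., v(t) = v(0) * exp(-t / tau):
    for (int i = 0; i < 100; ++i)
        v += dt * (-v / tau);

    std::cout << v << std::endl;   // roughly exp(-0.1 / 0.02) = exp(-5), up to discretization error
    return 0;
}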

iamthwee commented: nice +14
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

This isn't really a problem of templates, it's just that you don't have a function named maximum that can take 3 values. Your maximum function template can only take a single value (not to mention that that function is not going to work at all, because it has obvious errors in it).

If you need a maximum function that takes three values, you need to define one that takes three values:

template <typename T>
T maximum(T v1, T v2, T v3) {
  //.. code here..
};
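
In case it is not obvious how to fill in that body, here is one minimal possibility (just a sketch, there are many ways to write it):

#include <iostream>

template <typename T>
T maximum(T v1, T v2, T v3)
{
    T result = v1;
    if (v2 > result) result = v2;
    if (v3 > result) result = v3;
    return result;
}

int main() {
    std::cout << maximum(1, 7, 3) << std::endl;        // prints 7
    std::cout << maximum(2.5, 0.1, 9.9) << std::endl;  // prints 9.9
    return 0;
}
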
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

The fact that most Python libraries are written either in C or C++ (or Fortran for NumPy / SciPy / ..) should tell you something about the limitations of Python as a system language. Thinking that Python is appropriate as a system programming language is missing the point of Python altogether, it's not understanding its strengths.

Having been exposed to some of the details of how operating system kernels and drivers are written, I can tell you that this can definitely not be done in a high-level language like Python. And even if you really wanted to, you would have to strip away so many aspects of Python that it wouldn't really be recognizable as such; it would become essentially a C-like language with Python syntax. And what would be the point of that? We have C already, which has a syntax everyone is familiar with. I could imagine a stricter and more feature-rich language being used instead of C, maybe something like Embedded C++, but that's about as far as I think one could move away from a low-level, C-like language.

There are a number of things that are very common in kernel programming which are, for the most part, impossible in Python (AFAIK). For one, having direct memory control and addressing. Remember, much of the kernel code runs without virtual addressing (directly accessing physical memory addresses), and thus needs to manage and map its memory space precisely, byte-per-byte. This rules out any language that does not …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

The safer way to input an integer from cin is to use the following:

if(!(cin >> choice))
  cin.clear();   // the read failed, so reset the error state
cin.ignore(std::numeric_limits<std::streamsize>::max(), '\n');   // discard the rest of the input line (needs <limits>)

What it does is this. If the input operation failed (e.g., you did not enter a valid integer value), the stream is put in a failed state, which is what the if-statement tests here, to be interpreted as "if ( NOT ( stream in good state ) )". Then, it calls "clear()" on cin to clear the error state and make the stream readable again. Finally, in any case, it calls "ignore()" on cin to discard whatever else was typed after the number (everything up to the end of the line), so that the next operation on the stream does not pick up those stray characters.

Without this, as you currently have it, if someone enters a non-numeric value, it reads nothing and puts the stream in an error state (std::ios_base::failbit) and all subsequent operations on it will have a similar effect, i.e., they will not read anything and just leave the stream in an error state. That's why you get into an infinite loop.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Just so you know, the proper C++ version of that (mostly C-style) code is this:

#include <iostream>
using namespace std;

int main()
{
  int p = 0;
  cin >> p;
  cout << p << endl;
  return 0;
}

That readInt function just reads an integer from the standard input stream, and it works like this:

int readInt()
{
    // <-- skip all non-numeric characters --
    int cc = cin.get();
    while(cc < '0' || cc > '9')
        cc = cin.get();
    // -- end -->
    // <-- read each digit of the number --
    int ret = 0;
    while(cc >= '0' && cc <= '9')
    {
        // take the current number, multiply by 10, and add the digit.
        ret = ret * 10 + ( cc - '0' );
        // read the next digit
        cc  = cin.get();
    }
    // -- end -->
    return ret;
}

That's about as much as it could be spelled out. I replaced the getc(stdin) with cin.get(), just because that's more in C++ style (cin is the C++ input stream, and stdin and its related functions are part of the old C legacy that C++ carries around, which is usually to be avoided except in special cases). Also, the code should not include the <stdlib.h> header, for two reasons: (1) because it's not a standard C++ header (it's a C header) and the C++ equivalent for it is <cstdlib>, and (2) because it's not the correct header for what is needed there, which is stdin (that would be <cstdio>)

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I'm not a big fan of using typedefs to hide a pointer type. It just adds confusion and usually doesn't make the syntax simpler, often the opposite (int* becomes IntPtr or something like that). Even when using smart-pointers (as one should!) like std::unique_ptr or std::shared_ptr, you still need to make it clear in the name of the typedef what kind of smart-pointer it is, like std::unique_ptr<int> becomes IntUniquePtr, which is again not a huge gain in terms of syntax, and it often still leads to some confusion or some non-idiomatic code.

Other practical reasons people use typedefs are:

Reduce large template types and nested types to something smaller for use within the body of a function, like so:

 typedef typename std::vector<T>::iterator Iter;
 for(Iter it = v.begin(), it_end = v.end(); it != it_end; ++it)
    ...

Naming the actual type only in a single place, instead of everywhere where it is used. Like this:

template <typename T>
class vector {
  public:
    typedef std::size_t size_type;

    size_type size() const;
    size_type capacity() const;

    void resize(size_type new_sz);
    ...
    // If std::size_t was used everywhere, then changing it would mean
    // you would have to change it everywhere. But now, you only have 
    // to change the typedef.
    // Note that the same is true for within a large function.
};

Hide away template types to make them appear as simple types, like this example from the standard (the std::string class):

namespace std {

  typedef basic_string<char, char_traits<char>, allocator<char>> string;

};

And, …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Well, the size_type of any container (or std::string) is just the preferred type for indices and sizes related to that container. It is usually fine to use another integer type, such as int. There are a few dangers associated with not using size_type but they are very unlikely to happen in nearly all circumstances.

For example, if you do this for-loop:

for(int i = 0; i < str.size(); ++i)
  ...

The type of size() is size_type, and if that type is larger than int, then the value returned by str.size() could be larger than what can be represented by the int type. This will lead to an infinite loop because when i reaches the maximum of its range, it will wrap around to the minimum of its range (a large negative number) and the whole thing will never stop because i can never reach a value larger than size().

Using size_type guarantees that you won't get those kinds of problems, but obviously, you can see that such problems won't occur when the sizes or indices are small. So, just consider using size_type as the "playing it safe" option, which is what you would do when writing industrial-strength code.
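
For example, the "playing it safe" version of that loop would look like this (the count_spaces function is just a made-up example):

#include <cstddef>
#include <string>

std::size_t count_spaces(const std::string& str)
{
    std::size_t n = 0;
    // size_type is guaranteed to be able to represent any valid size or index of the string:
    for (std::string::size_type i = 0; i < str.size(); ++i)
        if (str[i] == ' ')
            ++n;
    return n;
}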

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I agree that moderators should have access to the editorial articles before they are published and there should be some review process that allows the author or Davey/Dani to ask specific moderators to take a look at the article before it gets approved.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

What is the difference between high level languages and low level languages?

Here is a bash script to answer your question:

#!/bin/bash
echo "high level languages" | sed -e 's/\ /\n/g' > hl.txt
echo "low level languages" | sed -e 's/\ /\n/g' > ll.txt
diff hl.txt ll.txt
rm hl.txt ll.txt

Run this and you will know exactly what is the difference between "high level languages" and "low level languages".

If the answer does not satisfy you, then there is always google.

What does a high level language do that a low level language can't?

That question is inverted. Low level languages can do a lot of things that high level languages cannot do, not the other way around. High level languages allow you to easily do the things that you usually do often and/or produce complex applications with very few lines of code. But, fundamentally, high level languages are detached from the hardware, are sandboxed into virtual machines / interpreters / renderers (e.g., browsers), are domain-specific (e.g., SQL, Matlab, etc.), and hide away many implementation details without granting the programmer access to them. So, there are tons of limitations associated with high level languages as a result of that, but for 99% of the kind of work you do with those languages, that's not a problem. Most low level languages are virtually limitless when it comes to what you can do with them, however, they don't make your life easy when it comes …

iConqueror commented: thankyou +0
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Problem-solving skill is a combination of many things, and that's why it's difficult to pin down as one particular thing you need to learn.

There is definitely an analytical aspect to problem solving, which is sort of what Hiroshe is hinting at. This is the ability to extract, from a problem definition, the essential aspects of it and frame it in an appropriate way in your mind. This is where classical computer science knowledge is useful, i.e., knowing all those classic algorithms and kinds of problems (e.g., TSP, clustering, sorting, etc..) and how to analyse them.

But another major part of having problem solving skills is having a large bag of practical tricks or patterns. Because the reality is, most programming tasks don't involve you sitting down and "analysing" the problem at hand, they involve you pulling the appropriate trick out of your bag of tricks and applying it to the problem. In other words, it's mostly about "ah, this is this kind of a problem, I have a trick for that, here it is..". At first, as a programmer, you might find yourself analysing every problem to come up with the best solution for it, and then, it gradually transitions to having a bag of tricks big enough to quickly solve almost any day-to-day programming problem. And this is something you can only learn through experience and practice.

Yet another aspect of problem-solving skills is the ability to anticipate road-blocks or "theory meets reality" issues. This is …

iConqueror commented: thankyou +0
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Hi Glen, and welcome to Daniweb!

I'm also from Canada (Quebec). And we have the same last name (Persson), I presume you have some Swedish ancestry too? My father is Swedish.

glen.persson commented: Hi, yes my ADOPTED dad was from Denmark, born in Sweden. We lived in Dollard Des Ormeaux (spell) for four months in 2007. LOVED the nice people and wonderful bright grocery stores. Now we are in Alberta for the time, we spend winters in warm BC. +0
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster
  1. When I was about 14, I think. Started in Visual Basic, but quickly moved on to Delphi, and to C++ a few years later. But used many other languages too along the way (C, Fortran, Python, Java, SQL, html/js/php, Matlab, etc.).
  2. C++ because of the endless possibilities from close-to-the-metal stuff to high-level sugar-coated coding, and never any undue compromises.
  3. No, I thought it was fun and intuitive from the start. And when it got "hard", I welcomed the challenge.
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

<M/>, I think that RikTelner was trying to put up Hitler as a pretty good contender for the dreaded title of most evil man in history, since the world war that he started ultimately led to at least 60 million deaths (excluding the Japanese side of the conflict).

The death toll of Stalin doesn't reach that high, even by the most inflated estimates. Stalin's death toll is probably closer to 20 million. There is no doubt though that Stalin is easily in the top 3 of most evil people in history, but I don't think you could put him above Hitler on that list.

And also, it helps to be reminded that things are always seen from a particular point of view, and there is usually an opposite point of view too.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Well, from the World Bank censuses, the empirical numbers today seem to put most "normal" countries (i.e., excluding island states and really large countries like Canada, Russia, etc.) between 0.1 and 0.5 hectares of arable land per capita, which is about 0.001 to 0.005 km2 or 1 to 5 thousand square meters. So, that's basically a patch of land between 30x30 meters and 70x70 meters. Also note that many countries on the higher end of the spectrum export and/or spoil a lot of their food, while countries on the lower end import food, ration it more, and/or starve.

If you look at the world data, you have about 7 billion people and about 14 million km2 of arable land, giving a similar ball-park figure of about 2000 square meters (about 45x45 meters) per person.

But, of course, there are many caveats to these figures.

For one, it doesn't take into account sources of food like fishing, hunting, raising animals (needs land for grazing, and that land is not considered "arable" because you're not planting crops on it), or sea-based cultures ("fish farming"). In other words, it is definitely not a measure of "how much land would I need to survive".

Also, modern agricultural practices (i.e., the "green revolution") is largely responsible for most of the dramatic reduction of these figures over the years. In other words, even if you had a pretty good estimate of how much land / resources is needed to feed one person, within our modern agriculture system, it would not …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Planes are represented in the general linear form. This form can be understood as a normal vector n = (a,b,c) and a distance d between the plane and the origin, along the normal vector. So, for any point (x,y,z), you can compute the following:

a*x + b*y + c*z + d

which will be 0 if the point is on the plane, and it will be positive or negative depending on whether the point is above or below the plane. This is the operation that DX calls "dot product of a plane and a vector". It is essentially the dot product of the vector with the normal of the plane, and then, offset by the distance of the plane to the origin, such that you get these negative (below), zero (on plane) and positive (above) values.
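
For example, here is a minimal sketch of that computation with a hypothetical Vec3 / Plane pair of structs (not the actual DX types):

struct Vec3  { float x, y, z; };
struct Plane { float a, b, c, d; };   // plane equation: a*x + b*y + c*z + d = 0, with (a,b,c) the (unit) normal

// Signed distance of a point to the plane: 0 on the plane, positive "above", negative "below".
float plane_dot(const Plane& p, const Vec3& v)
{
    return p.a * v.x + p.b * v.y + p.c * v.z + p.d;
}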

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

This line declares a variable called pNf which is an auto-pointer to a Notification object, and it is initialized to _queue.waitDequeueNotification.

The AutoPtr is a class template, and the Notification class is its template argument, making AutoPtr<Notification> an auto-pointer to a notification object. I believe that it is essentially a reference-counted smart pointer, so it is closer in spirit to the standard shared-pointer than to the unique-pointer.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

If you print out all the class sizes, things will become a bit clearer. If I run the following:

#include<iostream>
using namespace std;

class Base {};

class D1 : public Base {};

class D2 : virtual public Base {};

class DD : public D1, public D2 {};

int main()
{
    cout<<sizeof(void*)<<endl;
    cout<<sizeof(Base)<<endl;
    cout<<sizeof(D1)<<endl;
    cout<<sizeof(D2)<<endl;
    cout<<sizeof(DD)<<endl;
    return 0;
}

I get as output:

4
1
1
4
8

So, for a pointer size of 4, the Base class has a trivial size of 1 (because the standard says it cannot be 0), the D1 class has the same size as Base because that's all it contains and there is no virtual inheritance there. Then, the D2 class has the size of 4 because it needs to contain a pointer to the virtual base class, and that virtual base class' storage is absorbed in the D2 class (empty base-class optimization). And finally, the DD class has a size of 8 because it is composed of a subobject of class D1 and a subobject of class D2. The natural memory alignment of the platform is 32bit (4 bytes), and therefore, the compiler will optimize the memory layout such that variables (data members, incl. virtual pointers) fall on 4 byte alignment boundaries. So, the first 4 bytes of memory are used by the D1 subobject, where only really 1 byte is needed while the other 3 bytes are padding (unused bytes), and the last 4 bytes are used …

tapananand commented: Awesome Answer!! Thanks a lot!! +0
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Since robotic motion planning is, generally-speaking, the topic of my PhD thesis, I can tell you quite a bit about this, probably far more than you really want to hear. I'll just try to give you a quick overview, with some key terms and places to start looking. If you really want precise information and recommendations, you are going to have to explain in more detail what your problem is really about (objectives, constraints, environment features, etc..).

If you want to do motion planning for a lawn mower around obstacles, then there are a ton of options, which mainly depend on what kind of sensors you have (or expect to have). If you don't have any real capabilities to do mapping of the environment and doing localization of the robot, then there isn't much you can do except what the Roomba-like robots do, which is to go in a straight line, bump into an obstacle, turn around to some random direction, and repeat; it's just the random nature of that algorithm that guarantees full coverage of the space with high probability after some time.

If you have local information on obstacles, such as lidar scans or some vision system, then you can consider local planning strategies, such as potential fields (see here).

To fulfill your objectives (mow the lawn, with some pattern), you might want to use something like a moving target chasing method, for example, many people use missile guidance algorithms for this kind of stuff. These …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Reverend Jim thought that you were talking about 4K as in 4 kilobytes (or 4KB), and by that measure, the size of a 4KB file is, well, 4 * 1024 bytes. He simply did not realize you were talking about that new ultra-high-definition format for video and televisions that is commonly called "4K" (for its roughly 4000 horizontal pixels), which has 4 times as many pixels as 1080p video.

Since 4K has 4 times more pixels than 1080p, a rough approximation of the file sizes will be 4 times that of an equivalent 1080p video file. But, of course, most video compression methods aim to achieve as much compression as possible without negatively affecting the quality of the picture, i.e., they merge frames or parts of frames that stay static across several video frames (e.g., a static background), and also compress groups of pixels that have nearly identical color values (e.g., making bigger pixels). So, when the resolution is as high as 4K, there is probably a lot more opportunity for compression too, i.e., when the resolution is higher quality than what you can perceive with your naked eyes, it can compress a lot before you notice any difference. So, I would say that it might not be as much as 4 times the size of 1080p video, maybe as low as 2 times bigger, or even less, depending on the encoding quality settings.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

isn't if(--count) equal to if(--count == 0)?

No, they are exactly opposite. If you have any integer or pointer variable, call it a, then writing if( a ) is equivalent to writing if( a != 0 ), because any non-zero (or non-null) value converts to true, so, if(a) is true as long as a is not equal to zero.

I thought the author just want me to implement it just to test my basic template building skill then apply it to a designated container.

Or maybe the author just wanted you to discover the issues that you are discovering right now. If that's the case, then that author is smart, because the best way to learn is not to be told step-by-step how to do things, but to discover the issues and search for ways to solve them on your own.

Yet, you just made me realize that I still have bad habits and poor prediction skills, such as 3, 4, 5, and 7.

That's alright. Bad habits die hard, but the earlier you become aware of them, the better.

1, 2, 6, and 8 are intentional (too lazy, use anything as long as it is "working").

Well, 1 and 2 are not really optional. Not having those constructors / operators is a bug, and of the worst kind. The worst kind of bug is when it still is "working", or so it seems, but it is actually doing the wrong thing, and silently …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

You cannot do a forward declaration of a class outside of its namespace; you need to do this:

namespace std {
template <class> struct hash;
}

I think that will solve it.

Also, I'm pretty sure you need the template <> on the definition of the hash's operator() for your full specialization.
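
For example, here is a minimal sketch of such a specialization for a hypothetical class my_key (the forward declaration in namespace std and the template <> are the two important parts; including <functional> instead of writing the forward declaration yourself also works, and is the fully portable option):

#include <cstddef>

namespace std {
template <class> struct hash;   // forward declaration inside namespace std, as described above
}

struct my_key { int id; };

namespace std {
template <>                      // full specialization of std::hash for my_key
struct hash<my_key> {
    std::size_t operator()(const my_key& k) const {
        return static_cast<std::size_t>(k.id);   // some hash of the members
    }
};
}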

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Ok, about the error, it's very simple, I should have caught it earlier... You need to use new to allocate an object to pass to the shared-ptr:

return shared_ptr<T>(new T(std::forward<Args>(args)...));

in your make-shared function.

But looking at your shared-ptr class, there are obvious problems:

template <typename T> 
class shared_ptr {
  public:
    shared_ptr() = default;
    shared_ptr(T* point): p(point) { ++count; }
    shared_ptr(T* point, std::function<void(T*)> rem):
        p(point), del(rem) { ++count; }
    shared_ptr(const shared_ptr<T>& val):
        p(val.p), count(val.count), del(val.del) { ++count; }

    T& operator*() const { return *p; }
    T* operator->() const { return & this->operator*(); }
    std::size_t use_count() { return count; }

    void deleting()
        { del ? del(p) : delete p; }
    ~shared_ptr() { if(count) deleting(); else --count;}
  private:
    T *p = nullptr;
    std::size_t count = 0;
    std::function<void(T*)> del;
};

Here are a few problems that immediately pop out:

  1. No copy-assignment operator.
  2. No move-constructor and move-assignment operators.
  3. Single-parameter constructor not marked with explicit.
  4. The test if(count) should be if(count == 1) (or, use if(--count == 0)).
  5. The reference count needs to be a shared state between all shared-pointers that point to the same object. Just consider this situation ("sp" for shared_ptr<T>): sp p1(new T()); sp p2 = p1; sp p3 = p1;, which will result in p1 having a count of 1, p2 and p3 both having a count of 2, and the object will be destroyed as soon as p1 is destroyed, regardless of the situation, i.e., there is, in effect, no reference counting. (A minimal sketch of a properly shared count is shown right after this list.)
  6. You have a lot …
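
Regarding point 5, here is a minimal sketch of what a properly shared reference count looks like in practice (the count lives on the heap and is shared by all copies; the deleter, the move operations, and thread-safety are all omitted to keep the sketch short):

#include <cstddef>

template <typename T>
class shared_ptr {
  public:
    explicit shared_ptr(T* point) : p(point), count(new std::size_t(1)) {}

    shared_ptr(const shared_ptr& val) : p(val.p), count(val.count) { ++*count; }

    shared_ptr& operator=(const shared_ptr& val) {
        if (this != &val) {
            release();            // let go of the currently held object first
            p = val.p;
            count = val.count;
            ++*count;
        }
        return *this;
    }

    ~shared_ptr() { release(); }

    std::size_t use_count() const { return count ? *count : 0; }

    T& operator*() const { return *p; }
    T* operator->() const { return p; }

  private:
    void release() {
        if (count && --*count == 0) {   // last owner going away: destroy the object and the count
            delete p;
            delete count;
        }
    }

    T* p = nullptr;
    std::size_t* count = nullptr;       // the count itself is shared between all the copies
};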
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Given that the biggest vulnerability in any network is the human beings using it (e.g., not protecting physical access to key machines, leaving for a break and leaving the computer logged in, using simple passwords or none at all, visiting dubious websites while at work, etc..), I would fear that any autonomous network defense software, if very clever, would deduce or learn that the best way to protect itself is to not let any human being near it, rendering the whole network useless to the people that are meant to use it. Basically, like HAL, i.e., shut the humans out to minimize the threat to the "mission".

1) Neuter your system. Eliminate the noisy pieces, simplify assumptions, and design for a mathematically precise environment.

2) Expect you will have faults. Design for the elements that occur every day in practice.

I would say there are many more options than that. Those options you mentioned are what I would consider parametric approaches, in the sense that they try to model (or parametrize) all the possible faults and then either eliminate them from the analysis / experiments (1) or design to avoid or mitigate them (2). Typically, in research, you start at (1) and incrementally work your way to (2), at which point you call it "development" work (the D in R&D). But approaches that have had far more success in the past are the parameter-less approaches. The idea there is that you don't try to understand every possible failure, you just …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

it should print 10(1+2+3+0+1+2+0+1+0)

So, you want to add together the first three digits of each row. Right? I say that because NathanOliver's code assumes that you want to interpret the first three digits as one number, and add the three (for each row) together, i.e., it should print 145 (123+12+10).

Your code:

for(i=0;i<3;i++)
{
    readob>>a[i];
    total+=(int)a[i];
}

is not correct, but almost. First of all, you don't need a cast to int, because char is already an integral type. Second, you don't actually need an array of chars, because you use it only temporarily inside the loop. And last but not least, the character (digit) that you read from the stream is a string character, i.e., a byte value that ends up translating into a printable digit (see the ASCII table for one example of the encoding used). In other words, a char that is printed as "2" will actually have a numerical value (when treated as an integer) of 50 (if ASCII is used, but it could be something else). So, your addition of 1+2+3 is not going to give 6, but probably 150 (49+50+51). To convert a char digit into an integer number, you can just do c - '0' because the digits are always (I think) in order in the encodings (ASCII, UTF8, etc.). So, with those things in mind, your code should be:

for(int i = 0; i < 3; ++i)
{
    char c;
    readob >> c;
    total …
Learner010 commented: very helpful +4
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Generally, before being a great reverse engineer you have to be a great engineer. So, I would advise that you start by trying to become a good programmer before you even consider going down this road, which is the wrong road, btw. For any experienced programmer, there is no real mystery about how these software cracks are made; it's pretty straightforward, such as in Hiroshe's example. Making keygens is more a matter of how naive the verification function is.

There are some tools that people use, but they are probably not what you expect, i.e., they are not "automatic" cracking software. They are tools like disassemblers and similar tools like in-memory bytecode inspectors. Either way, you end up dealing with machine code or bytecode (almost like machine code), which you then have to comb through to find an opportunity to circumvent the security.

This isn't rocket science, just a lot of patience and bad intentions.

And to that point. The rules of this forum do not permit the discussion or promotion of illegal activities. We cannot condone such activities and I don't expect anyone will (or should) give you any precise instructions on how to crack software. I think that if anyone would go too far beyond the kind of vague explanations I just gave, I, as a moderator, might have to delete that post (and possibly issue an infraction against the rules of this forum site).

I am a bit confused about where this question should be placed.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

@sami9356: We don't appreciate it much when people simply copy-paste their assignment questions to this forum. You are unlikely to simply get an answer. The answer to your question can be found within some explanations of how to correctly and efficiently implement an assignment operator; here are a few tutorials on that: here, here and here. The two problems to check for are mentioned in all three articles; if you read them, you will find your answer.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I wasn't born yesterday, and I recognize that little game you're playing by constantly diverting and deflecting, trying to trigger more responses and frustrations. I believe the colloquial name for that is "trolling". The answers to your questions and concerns have been pretty clear and comprehensive at this point. I see no reason to continue elaborating, and unless you provide more substantive explanations of your (mis)understandings, I would advise others not to waste any more time on this guessing game either.

If you are truly genuine about your misunderstanding of this situation, you are doing a poor job at communicating that. I advise you to provide a clear and comprehensive explanation of your concern or question. Nobody wants to keep guessing what you mean or want to know, without you clearly expressing it.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Here is where the logic is wrong:

"since the default is only and only executed when no case matches"

That's not true. If no other case matches, then the switch will jump to the default case. It's an "if-then" rule, not an "if-and-only-if" rule. The fact that no other case matches is sufficient to get the default case executed (jumped to), but it is not necessary. In other words, it's a one-way rule:

  • "If no case matched, then execute default": true
  • "If execute default, then no case matched": false

So, with that faulty logic statement removed, your problems vanish.
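
For example, here is a minimal illustration of the "not necessary" direction: a case that matches can still reach the default label simply by falling through (no break):

#include <iostream>

int main() {
    int x = 1;
    switch (x) {
        case 1:
            std::cout << "case 1 matched" << std::endl;
            // no break here, so execution falls through...
        default:
            std::cout << "default executed too" << std::endl;   // ...and the default code runs as well
    }
    return 0;
}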

So, to your question, "Why???", well, the reason is that if you understand the rule correctly, the compiler does exactly what it is required to do.

By the way, this is not a problem with C++, but rather with C (or one of its earlier predecessors). I would expect that all programming languages derived from C (which are essentially all mainstream programming languages used today) have the exact same behavior for switch-statements. So, that's another answer to the infamous "Why???" question: it's just traditionally always been so, since the dawn of programming (i.e., when Ritchie / Thompson created C). AFAIK, only languages derived from Pascal (which are nearly extinct now) have the behavior that you are describing.

Furthermore, there is a technical reason "Why???" the switch statement has this behavior. Basically, a switch statement is a series of GOTO statements with case-labels. GOTOs and labels are the most …

TrustyTony commented: Right! +12
ddanbe commented: Deep knowledge! +15
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

The mechanism for calling virtual functions is, technically speaking, unspecified or implementation-defined, meaning that there is no actual guarantee about how it is done. However, it is almost always accomplished the same way, via a virtual table. So, for each class that defines or inherits virtual functions, the compiler will generate a table (array) of function pointers, each pointing to the appropriate function.

The table is laid out such that each specific function has its specific place (index) in that table. And the placement of the functions is essentially hierarchical for all the inheriting classes. So, let's say you have classes A, B, and C, where C derives from B, which derives from A. Say that A has a virtual destructor (as all base classes should) and a virtual function called "foo". Then, B has a virtual function called "bar" and C has one called "foobar". Then, the virtual table for class A will have (destructor, "foo") in it, then the table for class B will have (destructor, "foo", "bar") in it (in that order), and finally, C will have (destructor, "foo", "bar", "foobar"). Because of this hierarchy, the virtual table of C "looks like" a virtual table for class A, if you only look at the first two entries. That's how that works.
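
In code, the hierarchy described above would look like this (the tables themselves are generated by the compiler; the comments only describe the typical layout):

class A {
  public:
    virtual ~A() {}           // vtable of A: { destructor, foo }
    virtual void foo() {}
};

class B : public A {
  public:
    virtual void bar() {}     // vtable of B: { destructor, foo, bar }
};

class C : public B {
  public:
    virtual void foobar() {}  // vtable of C: { destructor, foo, bar, foobar }
};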

Finally, whenever you create an object, there is a pointer within that object (which you cannot see, because the compiler generates it for you), and that pointer points to the virtual table of the most-derived class …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I think it's perfect. Pretty much. All it's missing is fitting in a hammer and sickle somewhere on the UI. ;)

diafol commented: heh +0
blackmiau commented: I call dibs on the sickle! +0
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

shouldn't the depth be between 0 and 1 (if within the frustum)?

The projection matrix takes the vector to the so-called "clip" coordinates, not the screen coordinates. See the last section of this complete explanation.

The depth value will only be between 0 and 1 once you have done the division by the 4th component of the vector. In other words, you start with the "world" vector:

Vm = | Xm |
     | Ym |
     | Zm |
     | 1.0|

which you then multiply by PxM, to get the "clip" vector "Vc" as this:

Vc = P x M x Vm
Vc = | Xc |
     | Yc |
     | Zc |
     | Wc |

And then, you can get the "screen" vector (where x and y are between -1 and 1, and z is from 0 to 1) as so:

Vs = | Xc / Wc |
     | Yc / Wc |
     | Zc / Wc |

You can recover Vm from Vc by the simple (P x M)^-1 transformation, but recovering Vm from Vs is a bit harder (non-linear), though not impossible either, of course.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

OOP is a programming paradigm, which is to say, it's a way to reason about the code, i.e., the way you see / understand the code. It's a kind of philosophy, if you want. In very general terms, it's about looking at the overall application as a collection of objects, each belonging to a certain class that can play certain roles in the software, that, together, provide all the functionality that make the application work. Then, there are a number of practical patterns and abstract concepts that are attached to this paradigm, such as inheritance / polymorphism, encapsulation, abstraction, design by contract, etc...

Some languages have these practical patterns and abstract concepts more deeply ingrained in their language rules than others. I wouldn't really say that any language is really "pure" OOP, because programming paradigms are, first and foremost, about what goes on in the programmer's head, not in the code. It's also important to understand that programming paradigms are not mutually exclusive and don't have clear dividing lines.

You can write in an object-oriented way in C just as well as you can write in a procedural way in Java, and both of those things are actually very common. Like many other "philosophies", doing "pure OOP" software is something that only academics can afford to do. And in practice, no language can force you to remain "pure" in whatever paradigm is favored by that language, because the language doesn't control how you think, it only influences how you implement …