mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I think that most false advertising laws do allow for some level of approximation, even if it favors the advertiser.

Also, if I had a dime for every time I bought some tech that didn't live up to specs, I would be a rich man. I have quite a bit of experience with batteries, and I must say that the mAh charge capacity values are nothing more than ball-park figures.

Another thing to consider is that the charge capacity that is generally quoted for batteries is the total charge capacity, not the useful charge. That means that the figure is the amount of charge you could extract from a fully-charged (and brand new) battery if you were to discharge it completely, called a "deep discharge", which will leave the battery completely useless (it cannot be charged again after that, if it even survives the deep discharging process without bursting). The actual usable capacity of the battery is usually about 50% of what is quoted on the label.

Also, have you considered the possibility that the battery that used to be labeled as 2000mAh is actually the same as the one that is currently labeled 1900mAh? It is possible that the company was originally giving an inflated figure, and got a slap on the wrist (e.g., lawsuit or complaints) and readjusted the label to a more accurate figure.

There is also a possibility that the company can no longer produce this battery due to regulations that prohibit it. Regulations on batteries are …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Ice cold Pastis de Marseille.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I don't think that constexpr is used that much yet. As for the standard libraries or the Boost libraries, they will only use constexpr if it is supported by the compiler they are being used with, for example, through the feature-detection macros used by Boost.

Although it can probably be used in interesting ways, constexpr is mostly syntactic sugar (when you need a compile-time constant value, it avoids requiring C++03 template meta-programming) or a way to enable some minor optimizations (pre-computing some expressions at compile-time). In other words, it is not really a necessary feature, nor something that cannot be worked around when necessary. The point is, there probably aren't any important libraries for which constexpr is an absolute necessity, at least, not yet.
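For instance, here is a minimal sketch of what constexpr buys you over C++03 template meta-programming (the factorial function is just a hypothetical example):

#include <iostream>

// A constexpr function can be evaluated at compile-time when its
// arguments are compile-time constants:
constexpr int factorial(int n) {
    return (n <= 1) ? 1 : n * factorial(n - 1);
}

int main() {
    // Usable where a compile-time constant is required, e.g., an array size.
    // In C++03, this would have required a recursive template.
    int arr[factorial(4)];  // factorial(4) == 24, computed at compile-time
    std::cout << (sizeof(arr) / sizeof(int)) << std::endl;  // prints 24
    return 0;
}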

should i use GCC instead of VS?

I would say yes, but that's my opinion (I'm pretty sure vijayan121 would disagree). I think MSVC is a terrible compiler, period. Any other option is better. Also, if you feel adventurous, you could try the brand new, fresh out of the oven, Windows version and/or MSVC-compatible version of Clang.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Is documentation systems something like Java libraries?

No. A documentation system, usually called a documentation generator, is a tool (program / application) that generates documentation about your code directly from your code (provided that you have some annotations in it). Notable examples are Javadoc (Java), Doxygen (C / C++ / Java / C# / Fortran / ..), Epydoc / Sphinx (Python), and others. I would say that for most projects they are necessary but not sufficient. You just write code as usual, and you add comment blocks before functions and classes to describe what they are, what they do, and what parameters they take, all using the standard tags (like @param, @author, etc.). The documentation generator will look through all your source code, pick out those functions and classes along with their associated comment blocks, parse the tags, and generate a nice-looking, searchable documentation file (in various formats, like HTML, help files, PDF, etc.). It's a nice way to have an up-to-date and complete API documentation of all your classes and functions, but in general, it is not sufficient because you need more of a "narrative" to make the documentation useful for outsiders (who want to use your code), such as tutorials, overviews, examples, etc. But that generated API documentation is a must-have.
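For example, a typical Doxygen-style comment block looks like this (a hypothetical function, just for illustration):

/**
 * Computes the Euclidean distance between two 2D points.
 * @param x1 X-coordinate of the first point.
 * @param y1 Y-coordinate of the first point.
 * @param x2 X-coordinate of the second point.
 * @param y2 Y-coordinate of the second point.
 * @return The distance between the two points.
 * @author mike
 */
double distance(double x1, double y1, double x2, double y2);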

Do you know how to use revision control?

A revision control, or version control system, is …

<M/> commented: How.... how do you write so much on every post (in a good way)! +0
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

In my lab, we have several computers that run Windows XP. They are all machines that are tied to a particular piece of hardware that they run, and we cannot upgrade the software that runs that specialized hardware, and therefore, we can't upgrade the OS. But as soon as XP was decommissioned, my university put in a policy that no XP box would be allowed to connect to the internet. So, all of our XP computers now run on closed networks, or I've set up firewalls to deny them outside access when they are connected to a network that could reach the internet. This is really the most that I can do, since upgrading is not an option.

I would imagine that many businesses are in a similar situation with some computers (maybe even most computers). So, maybe that statistic is a bit inflated by those companies that "still run machines on Windows XP" but keep them in closed networks. People have a tendency to assume that all PCs / computers are desktop computers that an employee is using to check his emails. Under my desk, I have 3 computers (two on XP, one on QNX, all about 10 years old) that only run hardware, no internet connection. In an engineering place I used to work at, they had several computers dedicated to simulation tasks, again, running old systems and on a closed network. Have you ever seen a modern manufacturing plant? There are desktop computers everywhere, mostly sitting in …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

From what the guys over at Clang and GCC have said, constexpr is actually one of the hardest C++11 features to implement. I think it's because it effectively constitutes (another) compile-time Turing-complete language on top of C++.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Just posting this through my phone, and it seems to be working quite alright. I find the mobile interface nice and clean, but it can be a bit difficult to navigate.
Using Chrome on an Android phone.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Never heard of an FPAA, can you explain.

I had never heard of FPAAs either, until I did a little research into practical ways to implement analog computations. From what I gather, they can be used like FPGAs, but instead of programming digital logic gates, you program analog circuits. It seems like a nice way to make an array of analog computers in a small and efficient package. I can certainly imagine that there would be a lot of noise generated by all those transistors packed on a single IC. But for an application like this (modeling neurons), the noise is probably inconsequential. If you were designing analog signal processing (which is generally aimed at removing noise or demodulating signals), FPAAs would not be suitable due to the noise level. But neuron emulation is a kind of pseudo-digital system (due to the thresholded firings), meaning that the noise is not going to be a factor.

The key thing to remember with "the brain" is that it is a big approximation calculator. In other words, everything is approximate, signals are fuzzy, and results are probabilistic. So, a digital logic system (like FPGAs or CPUs), whose main characteristic is the ability to make exact calculations, is an odd fit in the midst of it all.

Can we assume that modeling the neuron more closely and more accurately will make any difference to our understanding of intelligence?

Does your understanding of transistors help you understand …

TrustyTony commented: Agree to change title to better describe content +12
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

By the way, if you want to install software updates through the command-line, you can do this:

$ sudo apt-get update
$ sudo apt-get upgrade

And if there are kernel updates, do this:

$ sudo apt-get dist-upgrade

Just wanted to point that out, in case you didn't know.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

You might want to try this fix. I guess it must be an issue with gksudo / kdesudo that doesn't respect or use the same sudo / sudoers list.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

The problem is that your new user is not a "sudoer", that is, a user that can run commands as the super-user. To grant that user the right to run such commands, you have to add it to the "sudo" group. To do this, you must log in with a sudoer account (e.g., as root, or as the original user account from which you could install updates and such). Go to a terminal, and enter this command:

$ sudo adduser <username> sudo

where <username> is replaced by the name of the new user (note, you will have to enter your original user password). At that point, you should be able to run sudo with the new user. You can test that out by logging in with that account and trying a command like $ sudo echo "hello" to see if it succeeds.

There are also a few other ways to add a user to the list of sudoers; you can google for them if the above does not work.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I'm like prit, I have an on-and-off beard from just being lazy about shaving. That's fairly common among my colleagues too. But when I shave, I leave the sideburns, if that counts as a beard.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

The reason for the limited amount of "peeking" is that certain types of streams cannot accommodate more than a single character of peeking (and no seeking). But for the streams that can easily support moving back and forth (seeking) in the underlying data, there is no reason to use peek() instead of the tellg / seekg method. For instance, file-streams can accommodate seeking around the data, although it's preferable to read/write sequentially or seek as little as possible. String-streams support seeking around the data with ease, and there is essentially no significant cost to seeking back and forth anywhere in the data.

So, don't use "peek"; use the tellg / seekg method, for example:

std::string peek_next_word(std::istream& in) {
  std::string result;
  std::streampos p_orig = in.tellg();  // remember the current read position
  in >> result;                        // extract the next word
  in.clear();                          // clear eof/fail flags so that seekg will work
  in.seekg(p_orig);                    // jump back to the original position
  return result;
}

Basically, any implementation of a multi-character peeking function would have to use a mechanism similar to the above code (tellg / read / seekg), and that's why there is no reason to have a separate function (e.g., "peek_n(str,n);") in the standard streams to do this: it's not gonna be more efficient than this tellg-seekg method.
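For example, a quick way to test the function above with a string-stream:

#include <iostream>
#include <sstream>
#include <string>

int main() {
    std::istringstream iss("hello world");
    std::cout << peek_next_word(iss) << std::endl;  // prints "hello"
    std::string w;
    iss >> w;  // the stream position was restored, so this reads "hello" again
    std::cout << w << std::endl;                    // prints "hello"
    return 0;
}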

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Awww.... sounds painful. I hope it gets better soon, hang in there Dani.
Kram.

(N.B. "Kram." is Swedish for "I wish I could give you a hug." or "I send you a hug.")

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

A deduced type traditionally means a template argument type, like this:

template <typename T>
void foo(T&& t) { /* .. */ }

int i = 0;
foo(i);  // type T will be deduced from this call, from the type of 'i'

However, with the new standard also comes the keyword "auto" which is another form of deduced type, such as this:

int i = 0;

auto&& t = i;  // type 'auto' will be deduced from the type of 'i'

Universal references simply mean that when you have a situation like the above (a deduced template argument or "auto") followed by a double ampersand, the overall type of "t" will be whatever reference type is most appropriate: const or non-const, lvalue or rvalue reference. Which one you get depends on whether the variable initializing the reference (or passed to the function template) is an lvalue (a named variable) or an rvalue (a temporary value, or a moved-from lvalue), and on whether it is const or not.
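Here is a minimal sketch of those deduction cases (these are the standard reference-collapsing rules; the function is just for illustration):

#include <utility>

template <typename T>
void foo(T&& t) { /* .. */ }

int main() {
    int i = 0;
    const int ci = 0;

    foo(i);             // T = int&,       so T&& collapses to int&  (lvalue)
    foo(ci);            // T = const int&, so T&& collapses to const int&
    foo(42);            // T = int,        so T&& is int&&  (rvalue, temporary)
    foo(std::move(i));  // T = int,        so T&& is int&&  (moved-from lvalue)
    return 0;
}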

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I think you have the right idea, but everything is better explained with some terrible bits of ASCII art, so here it goes.

Currently, you have this:

(Old API)  --->  (Old Impl)

where the "Old API" is that set of advanced / simplified functions that is currently used to call that library, and the "Old Impl" is the current implementation of those functions.

I think you were originally talking about doing this:

(Repaired API)  --->  (New Impl)

where you were trying to find a way to make the old API a bit better or more organized, but without breaking compatibility. It's kind of difficult to do that, and it often leads to only very trivial improvements to the API (such as reorganizing the header files' structure).

The solution I recommend is either one of these two:

(New API) ---------------------> (New Impl)
                            ^
(Old API) --> (Glue Code) --'

Or:

                        (New API) --> (New Impl)
                            ^
(Old API) --> (Glue Code) --'

The idea is that you create the "New Impl" such that it is primarily meant to be called and used with some "New API", which is a completely re-designed API that is more modern, robust and modular, and whatever else you want to add to it. In other words, you design the new API and the new implementation together (it could literally be that the "New API" is a set of class declarations, and the "New Impl" is just …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

things like Weka and RapidMiner

These seem (at first glance) like two very typical examples of the kinds of "tools" you often find for specific purposes (e.g., analytics).

Weka is an academic project, probably written by students to try out specific algorithms. It's written in an academic language (Java), has a terrible by-the-book API with poor documentation (probably due to inexperience), and is, of course, half-finished. These problems are very common for libraries coming out of academia (including my own, in all honesty).

RapidMiner is a commercial tool, which operates by a whole different set of rules: the profit motive. Generally, these commercial tools don't want you to be able to "break out" of them, because that hurts their bottom line. They want you to adopt the tool, use it exclusively, and keep renewing your licenses and support plans. If they made an easy-to-use redistributable library with a nice API, you could just write your own code that calls it for the specific tasks that you need it for, and then be done with it. You would probably never need to change this again for a long while, and you wouldn't need much support or many upgrades. That's not a good revenue model for these companies. Instead, they give you a "full-feature", "graphical user interface", "no need to program anything" piece of software that they advertise to you as the perfect "all-in-one" solution for your data analytics needs. You get that software, you start creating stuff …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

If I understand correctly, it sounds like you have some old library that is poorly organized, and you are tasked with porting it to a new support library (e.g., Qt). It sounds like the poorly organized interface (set of functions) of the library is something that you would eventually want to re-design, but that in the meantime, you need to keep that interface so that you can do some regression tests when the port is completed.

Is that assessment correct? If so, then you have to realize two things: (1) you can't really significantly change the interface, and (2) you don't need to. In other words, you shouldn't worry about the organization of the interface, only the back-end. So, you should just stop worrying about that whole simplified / advanced business and how that is done, because it has to remain that way, and you have no choice.

Now, if you're going to eventually re-design that interface to solve some of those robustness issues that you said the library's interface has, then you need to start thinking about how the new interface is going to be structured, and then, model your back-end (Qt port) of the implementation to tailor it towards that new interface. Then, you write whatever code is needed to glue together the old interface with the new back-end so that you can do your regression tests. And finally, you write the new interface, and migrate whatever user-side code you need to migrate. Does that sound like a good …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

For help viewers, you should use Qt Assistant help-files, which are the Qt equivalent of Windows help files / menus. The Qt Assistant framework has all the features you are describing, and the help files are easily generated from HTML pages. With that, it's just a matter of launching it from the clicked() signal of a button (don't use "QDialogButton", that's for buttons that are inside a QDialog, such as OK/Cancel buttons). You can check out this tutorial about integrating Qt Assistant in a Qt application.
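As a minimal sketch (assuming Qt 5 and a help collection file, here hypothetically named myapp_help.qhc), launching Assistant from a button could look like this:

#include <QApplication>
#include <QPushButton>
#include <QProcess>

int main(int argc, char** argv) {
    QApplication app(argc, argv);
    QPushButton helpButton("Help");
    // Launch Qt Assistant with our (hypothetical) help collection file
    // whenever the button is clicked:
    QObject::connect(&helpButton, &QPushButton::clicked, []() {
        QProcess::startDetached("assistant",
            QStringList() << "-collectionFile" << "myapp_help.qhc");
    });
    helpButton.show();
    return app.exec();
}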

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

master.cpp includes advanced.cpp and simplified.cpp.

I would not recommend doing that. There is really no reason to create a "master" cpp file that includes a bunch of other cpp files. You should compile them separately and link them together (either as a static or dynamic library).

I generally find it acceptable to create some "include everything" header files, and many libraries do exactly that. However, I would still recommend breaking things up into different headers, such that users still have the option to include only the minimal number of headers for the specific functions / classes that they need. Several Boost libraries are structured this way, i.e., you have the choice of either including the "include all" header to get everything, or including just the individual headers for individual features of the library.
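For illustration, such a structure could look like this (the file names are hypothetical):

// my_library/simplified.hpp  -- declares only the simplified functions
// my_library/advanced.hpp    -- declares only the advanced functions

// my_library/all.hpp -- the "include everything" convenience header:
#ifndef MY_LIBRARY_ALL_HPP
#define MY_LIBRARY_ALL_HPP

#include "my_library/simplified.hpp"
#include "my_library/advanced.hpp"

#endif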

I might note that this does not appear to slow down compilation at all. I suppose that it should in theory since the pre-compiler would need to hook more files together, but this has not been my experience - in fact if anything it seems to speed up compilation.

If your experience is limited to this 2,000 LOC library, then I can easily imagine that you have not found that it makes a difference in compilation times. Also, C-style code (no classes, no templates) typically compiles very fast anyway. 2,000 lines of code is a trivial amount, and I would expect it to compile quickly and easily. If we moved that figure …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Learning occurs by adjusting the weight matrix in a neuron

Care to explain how? The logic required to update the weights is, generally speaking, exponentially higher in computational cost than the actual neuron firing logic / dynamics, which is quite trivial to compute, even on an ordinary PC. One of the big problems with ANNs (Artificial Neural Networks, which is what you are describing; btw, you should not use the general term AI as synonymous with ANNs, because ANNs are a very tiny subset of AI) is the fact that the algorithms to learn the weights of an ANN are either (1) computationally intractable, (2) highly underconstrained, or (3) very crude and thus requiring a life-time of learning (which is the way animal brains learn).

The forward-eular trick can be used to approxiate exponential decay thereby reducing the calculation computations.

It seems to me like the whole thing could be done with an analog computer. This whole thing, in fact, looks like the dynamics of a fairly ordinary switched-capacitor circuit. I would imagine that with a few op-amps and transistors, this could be realized quite easily. If you need fast and instantaneous simulation of some dynamical system, analog computers are your friend; they've been used quite a bit in robotics to do some of the heavy calculations instantaneously. In fact, why not try this with an FPAA?

Also, if you are going to use forward-Euler, you might as well just stop thinking …

iamthwee commented: nice +14
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

There are quite a few pieces of software that can do that.

One nice option is to just use rsync. This tool is easy to automate (e.g., bash script / cron job), and provides many useful options and features. It's an incremental backup tool, meaning that each time (e.g., each night / week) you do the rsync'ing between the client and server, it compares the file-systems (files, file-sizes, folders, etc.) to find any discrepancies between them, and then it transfers only the data needed to bring them back in sync. There are also options for security (ssh), compression, etc. This is my go-to utility for backups. You can also do dry-runs (with the --dry-run option) to simply process the file-systems and find out what has changed, without actually performing the sync'ing operation.
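For example, a typical invocation could look like this (the source path and server name are hypothetical):

$ rsync -avz -e ssh /home/user/ backup-server:/backups/user/

Here, -a preserves permissions, ownership and timestamps, -v is verbose, -z compresses the data in transit, and -e ssh runs the transfer over ssh. Add --dry-run (or -n) to see what would be transferred without actually doing it.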

For full image backups, you can also use dd over a network, but that's not incremental, meaning that it will require a lot of bandwidth.

Another option is to rely on a journaling file-system for your clients, or better yet, on a copy-on-write file-system such as btrfs. With a file-system like that, the incremental changes made to the partition are recorded as an integral part of the file-system, with options to move back and forth in time and to send the diffs over the network. Btrfs is essentially the Unix/Linux equivalent of Apple's Time Machine / Time Capsule.

If you want to check for changes in the hardware of your clients, you should be able to pick that …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

This isn't really a problem with templates; it's just that you don't have a function named maximum that can take 3 values. Your maximum function template can only take a single value (not to mention that the function is not going to work at all, because it has obvious errors in it).

If you need a maximum function that takes three values, you need to define one that takes three values:

template <typename T>
T maximum(T v1, T v2, T v3) {
  // start with the first value, then keep whichever is larger:
  T result = v1;
  if (v2 > result) result = v2;
  if (v3 > result) result = v3;
  return result;
}
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

The fact that most Python libraries are written either in C or C++ (or Fortran, for NumPy / SciPy / ..) should tell you something about the limitations of Python as a system language. Thinking that Python is appropriate as a system programming language is missing the point of Python altogether; it's not understanding its strengths.

Having been exposed to some of the details of how operating system kernels and drivers are written, I can tell you that this can definitely not be done in a high-level language like Python. And even if you really wanted to, you would have to strip away many aspects of Python to the point that it wouldn't really be recognizable as such, it would become essentially a C-like language with Python syntax. And what would be the point of that? We have C already which has a syntax everyone is familiar with. I could imagine a stricter and more feature-rich language being used instead of C, maybe something like Embedded C++, but that's about as far as I think it could even go away from a low-level C-like language.

There are a number of things that are very common in kernel programming which are, for the most part, impossible in Python (AFAIK). For one, having direct memory control and addressing. Remember, much of kernel code runs without virtual addressing (directly accessing physical memory addresses), and thus needs to manage and map its memory space precisely, byte-per-byte. This rules out any language that does not …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

This code looks very odd. I've been trying to figure out what language it is, but I can't. It has the hallmarks of a Pascal-style language, i.e., the := assignments and the for r := 1 to p do syntax, which are both definitely Pascal. However, the use of { } instead of begin end makes it very odd, because all the Pascal-style languages that I know would use the latter.

And then, the sum(i:= 1 to N, v[i,r]) = 1; lines baffle me a bit. By Pascal syntax, the single equal sign should mean a test for equality, which doesn't make sense without some sort of if-statement or conditional. If it's an assignment, then what is being assigned? The result of the sum? Makes no sense.

Is it a symbolic math language? As in, that line constrains the column of "v" to have a sum equal to 1. This does indeed look like something you could write in Maple, except that the syntax is a bit off. If this is a Maple-like language, then that code cannot be translated to C++ directly, because it requires a symbolic math solver to run. It might be one of the many CAS programs that I am not too familiar with. If you need to run this kind of program in C++, then you would have to link to a CAS library like GiNaC, Maxima, SAGE, SymPy, etc...

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

The safer way to input an integer from cin is to use the following:

if(!(cin >> choice))
  cin.clear();
// discard the rest of the input line (requires #include <limits>):
cin.ignore(std::numeric_limits<std::streamsize>::max(), '\n');

What it does is this. If the input operation failed (e.g., you did not enter a valid integer value), then the stream is put into a failed state, which is what is tested by the if-statement here, to be interpreted as "if ( NOT ( stream in good state ) )". Then, it calls "clear()" on cin to clear the error state and make the stream readable again. Finally, in any case, it calls "ignore()" on cin such that anything else that was entered after the number itself is discarded, so that the next operation on the stream will not pick up those stray characters.

Without this, as you currently have it, if someone enters a non-numeric value, the stream reads nothing and is put into an error state (std::ios_base::failbit), and all subsequent operations on it will have a similar effect, i.e., they will not read anything and will just leave the stream in an error state. That's why you get into an infinite loop.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Maybe like Qt Designer?

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Just so you know, the proper C++ version of that (mostly C-style) code is this:

#include<iostream>
using namespace std;

int main()
{
  int p = 0;
  cin >> p;
  cout << p << endl;
  return 0;
}

That readInt function just reads an integer from the standard input stream, and it works like this:

int readInt()
{
    // <-- skip all non-numeric characters --
    int cc = cin.get();
    // (cin.get() returns EOF when the input runs out, so stop on that too)
    while(cc != EOF && (cc < '0' || cc > '9'))
        cc = cin.get();
    // -- end -->
    // <-- read each digit of the number --
    int ret = 0;
    while(cc >= '0' && cc <= '9')
    {
        // take the current number, multiply by 10, and add the digit.
        ret = ret * 10 + ( cc - '0' );
        // read the next digit
        cc  = cin.get();
    }
    // -- end -->
    return ret;
}

That's about as much as it can be spelled out. I replaced the getc(stdin) with cin.get(), just because that's more in the C++ style (cin is the C++ input stream, and stdin and its related functions are part of the old C legacy that C++ carries around, which is usually to be avoided except in special cases). Also, the code should not include the <stdlib.h> header, for two reasons: (1) because it's not a standard C++ header (it's a C header) and the C++ equivalent of it is <cstdlib>, and (2) because it's not the correct header for what is needed there, which is stdin

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Why do you need to put it in the header?

Ok, let's just assume this example:

// my_foo.hpp:

#ifndef MY_FOO_HEADER
#define MY_FOO_HEADER

void foo_impl();  // <-- this is the function you want to "hide"

class bar {
  public:
    void foo();  // <-- this is the function that uses the foo_impl function.
};

#endif

// my_foo.cpp:

#include "my_foo.hpp"

void foo_impl() {
  /* some code */
}

void bar::foo() {
  foo_impl();
}

So, if we assume that foo_impl is not going to be useful anywhere else, then the main thing to do is to remove the declaration from the header and to remove the external linkage from the definition ("external linkage" means that it's visible to the compiler / linker as a function that can be called from other source files). There are two ways to remove external linkage: give it a static specifier, or put it in an unnamed namespace. In the first case:

// my_foo.hpp:

#ifndef MY_FOO_HEADER
#define MY_FOO_HEADER

class bar {
  public:
    void foo();
};

#endif

// my_foo.cpp:

#include "my_foo.hpp"

// NOTE the 'static' here:
static void foo_impl() {
  /* some code */
}

void bar::foo() {
  foo_impl();
}

The word "static" applied to a function just tells the compiler not to make that function visible outside of this translation unit (we call that "internal linkage"). You can also achieve the same effect for a whole collection of things by putting them in an unnamed namespace, like so:

// my_foo.hpp:

#ifndef MY_FOO_HEADER
#define MY_FOO_HEADER

class bar …
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I'm not a big fan of using typedefs to hide a pointer type. It just adds confusion and usually doesn't make the syntax simpler, often the opposite (int* becomes IntPtr or something like that). Even when using smart-pointers (as one should!) like std::unique_ptr or std::shared_ptr, you still need to make it clear in the name of the typedef what kind of smart-pointer it is, e.g., std::unique_ptr<int> becomes IntUniquePtr, which is again not a huge gain in terms of syntax, and it often still leads to some confusion or some non-idiomatic code.
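Here is one classic example of the kind of confusion I mean (a minimal sketch):

typedef int* IntPtr;

int i = 0;

// Surprise: this is NOT 'const int*' (pointer to const int). The const
// applies to the whole typedef'd type, giving 'int* const' (a const
// pointer to a non-const int):
const IntPtr p = &i;

// *p = 1;  // OK: the pointed-to int is not const
// p = &i;  // Error: p itself is const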

Other practical reasons people use typedefs are:

Reduce large template types and nested types to something smaller within the body of a function, like so:

 typedef typename std::vector<T>::iterator Iter;
 for(Iter it = v.begin(), it_end = v.end(); it != it_end; ++it)
    ...

Naming the actual type in only a single place, instead of everywhere it is used. Like this:

template <typename T>
class vector {
  public:
    typedef std::size_t size_type;

    size_type size() const;
    size_type capacity() const;

    void resize(size_type new_sz);
    ...
    // If std::size_t was used everywhere, then changing it would mean
    // you would have to change it everywhere. But now, you only have 
    // to change the typedef.
    // Note that the same is true for within a large function.
};

Hide away template types to make them appear as simple types, like this example from the standard (the std::string class):

namespace std {

  typedef basic_string<char, char_traits<char>, allocator<char>> string;

};

And, …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Well, the size_type of any container (or of std::string) is just the preferred type for indices and sizes related to that container. It is usually fine to use another integer type, such as int. There are a few dangers associated with not using size_type, but they are very unlikely to occur in most circumstances.

For example, if you do this for-loop:

for(int i = 0; i < str.size(); ++i)
  ...

The type returned by size() is size_type, and if that type is larger than int, then the value returned by str.size() could be larger than what can be represented by the int type. This will lead to an infinite loop, because when i reaches the maximum of its range, it will wrap around to the minimum of its range (a large negative number), and the whole thing will never stop because i can never reach a value larger than size().

Using size_type guarantees that you won't get those kinds of problems, but obviously, you can see that such problems won't occur when the sizes or indices are small. So, just consider using size_type as the "playing it safe" option, which is what you would do when writing industrial-strength code.
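For example, the "playing it safe" version of the above loop is:

for(std::string::size_type i = 0; i < str.size(); ++i)
  ...

(or the equivalent size_type of whatever container you are iterating over).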

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Great job Dani! Staying on top of your game!

I remember when I became a moderator. I think it was right in the thick of it, and I remember that the listing of "unresolved reported posts" (for non-mods: this is where "flag as bad post" reports go) was usually full (10-30 reports) all the time. Now, that listing is usually almost empty, meaning that just deleting spam posts here and there as you stumble upon them in the forums you visit as a mod is enough to keep up with the spammers. That's impressive!

There is still the occasional spam burst of posts, but those are easy to see, and quick to clean up and infract-to-ban.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I agree that moderators should have access to the editorial articles before they are published, and there should be some review process that allows the author or Davey / Dani to ask specific moderators to take a look at an article before it gets approved.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I would also add that it really doesn't matter that much. If you have to support older compilers, then you don't have a choice, of course, since std::array is only for C++11. Otherwise, you can use whichever you like, but unless you make only trivial use of the array, you should prefer std::array, just to keep things in line with the other STL containers. For example, what if you later decide to make the size dynamic and use std::vector instead? Then you will be happy that you used std::array, because all you will have to change is probably the array declaration itself, and the rest will stay the same, especially if you use auto and the other type-inference features of C++11.
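Here is a minimal sketch of that point (hypothetical code):

#include <array>
#include <vector>
#include <iostream>

int main() {
    std::array<int, 5> values = {1, 2, 3, 4, 5};
    // If the size later needs to be dynamic, only the declaration above
    // changes, to:  std::vector<int> values = {1, 2, 3, 4, 5};
    // The rest of the code stays the same:
    int sum = 0;
    for (auto v : values)
        sum += v;
    std::cout << sum << std::endl;  // prints 15
    return 0;
}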

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

The process is that once you create a tutorial draft, it sits in this antechamber where you can edit it, save it, edit it again, etc., until you are happy with it. At that point, you should send a PM to one of the editorial members (happygeek, Dani, ..?) to get it approved. If it's good, they will publish it as a tutorial. Moderators do not have any special powers when it comes to those editorial articles; they cannot publish them, nor review / revise them before they get published. That's the process, unless things have changed since the last time I published one.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

What is the difference betwen high level languages and low level languages?

Here is a bash script to answer your question:

#!/bin/bash
echo "high level languages" | sed -e 's/\ /\n/g' > hl.txt
echo "low level languages" | sed -e 's/\ /\n/g' > ll.txt
diff hl.txt ll.txt
rm hl.txt ll.txt

Run this and you will know exactly what is the difference between "high level languages" and "low level languages".

If the answer does not satisfy you, then there is always google.

What does a high level language do that a low level language cant?

That question is inverted. Low level languages can do a lot of things that high level languages cannot, not the other way around. High level languages allow you to easily do the things that you do often, and to produce complex applications with very few lines of code. But, fundamentally, high level languages are detached from the hardware, are sandboxed into virtual machines / interpreters / renderers (e.g., browsers), are domain-specific (e.g., SQL, Matlab, etc.), and hide away many implementation details without granting the programmer access to them. So, there are tons of limitations associated with high level languages as a result of that, but for 99% of the kind of work you do with those languages, that's not a problem. Most low level languages are virtually limitless when it comes to what you can do with them; however, they don't make your life easy when it comes …

iConqueror commented: thankyou +0
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

These code-completion or introspection tools that most IDEs have essentially implement the first half of a complete compiler (what we would call the "front-end" of the compiler). Think about it: a compiler needs to look at all your code and understand every piece of it, i.e., what every identifier means, what every function call refers to, what every class or type is and where it's defined, etc... And once it has done that, it can (1) verify that you obey all the rules of the language, (2) optimize things down (i.e., cut out the extra fat), and (3) generate the machine code for the final program. Well, typically, an IDE will have a kind of reduced compiler that stops just before doing those three things, leaving the IDE with a complete in-memory representation of your code, i.e., a complete "understanding" of your code. This is typically in the form of an AST (Abstract Syntax Tree).

Traditionally, IDEs have developed their code introspection tools on their own, often starting with a simple parser that can highlight the syntax elements (keywords, types, variables, literal values, etc.), and then building it up to a full semantic analyser (one that "understands" the code). However, recently, with the Clang project, people have, for once, built a compiler that is modular enough to allow it to be used only in parts, and some IDEs are using that instead (like Xcode, and soon KDevelop too). So, there's your answer, …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Problem solving skills are a combination of many things, and that's why they're difficult to pin down as one particular thing you need to learn.

There is definitely an analytical aspect to problem solving, which is sort of what Hiroshe is hinting at. This is the ability to extract, from a problem definition, the essential aspects of it and frame it in an appropriate way in your mind. This is where classical computer science knowledge is useful, i.e., knowing all those classic algorithms and kinds of problems (e.g., TSP, clustering, sorting, etc..) and how to analyse them.

But another major part of having problem solving skills is having a large bag of practical tricks or patterns. The reality is, most programming tasks don't involve you sitting down and "analysing" the problem at hand; they involve you pulling the appropriate trick out of your bag of tricks and applying it to the problem. In other words, it's mostly about "ah, this is this kind of problem, I have a trick for that, here it is..". At first, as a programmer, you might find yourself analysing every problem to come up with the best solution for it, and then it gradually transitions into having a bag of tricks big enough to quickly solve almost any day-to-day programming problem. And this is something you can only learn through experience and practice.

Yet another aspect of problem solving skills is the ability to anticipate road-blocks or "theory meets reality" issues. This is …

iConqueror commented: thankyou +0
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Hi Glen, and welcome to Daniweb!

I'm also from Canada (Quebec). And we have the same last name (Persson), I presume you have some Swedish ancestry too? My father is Swedish.

glen.persson commented: Hi, yes my ADOPTED dad was from Denmark, born in Sweden. We lived in Dollard Des Ormeaux (spell) for four months in 2007. LOVED the nice people and wonderful bright grocery stores. Now we are in Alberta for the time , we spend winters in warm BC. Offline +0
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster
  1. Is there any advantage using procedural than oo?

Advantages of procedural programming mostly boil down to simplicity. For one, a typical procedural program is just a straight-forward sequence of function calls and statements which, if done well, is very clear and has no hidden behaviors like constructors, stateful function calls, etc... Basically, when a problem is simple, a simple approach is better, naturally. I have seen far too often people construct elaborate OOP class hierarchies to solve trivial problems, in total overkill fashion.

Procedural programming is also simpler in terms of infrastructure, i.e., all the "under the hood" or "ground work" code that the compiler / interpreter has to construct to support the language features. Typical language features for OOP require mechanisms for dynamic dispatching (virtual functions), run-time type identification (to check down-casts or match exceptions, or do run-time reflection), garbage collection, extra indirections, forced pessimizing assumptions (e.g., aliasing), etc... These things cause the run-time performance penalties you pay when using OOP features, and they can also be impossible or very undesirable in certain contexts such as hard real-time systems, embedded systems, kernel code (operating system), etc.. In these contexts, the simplicity of the infrastructure is a big advantage, for several reasons, and therefore, zero-overhead languages are used almost exclusively, like C or C++ (where the OOP overhead is only present if you explicitly use the corresponding OOP features).

Finally, procedural programs are simpler to analyse for correctness. It can be very difficult to formally establish the correctness …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster
  1. When I was about 14, I think. Started in Visual Basic, but quickly moved on to Delphi, and to C++ a few years later. But used many other languages too along the way (C, Fortran, Python, Java, SQL, html/js/php, Matlab, etc.).
  2. C++ because of the endless possibilities from close-to-the-metal stuff to high-level sugar-coated coding, and never any undue compromises.
  3. No, I thought it was fun and intuitive from the start. And when it got "hard", I welcomed the challenge.
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

It declares a function called myFunc which returns an int and takes two int parameters. This function must also be a virtual member function of some class, because the = 0 part at the end is the "pure-specifier", which means that the function does not have a definition (implementation); that is only allowed for virtual member functions, making them pure virtual functions. With pure virtual functions, you force the derived classes to provide an implementation for them, which also makes the base class an abstract class, meaning that only objects of derived classes can be created. Abstract classes are sort of the C++ flavor of what is traditionally called "interfaces" in Java/C# OOP terminology.
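In context, it would look something like this (the class names are hypothetical, just for illustration):

class Base {  // abstract class: cannot be instantiated directly
  public:
    virtual int myFunc(int a, int b) = 0;  // pure virtual function
    virtual ~Base() { }
};

class Derived : public Base {  // forced to implement myFunc
  public:
    virtual int myFunc(int a, int b) { return a + b; }
};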

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

<M/>, I think that RikTelner was trying to put up Hitler as a pretty good contender for the dreaded title of most evil man in history, since the world war that he started ultimately led to at least 60 million deaths (excluding the Japanese side of the conflict).

The death toll of Stalin doesn't reach that high, even by the most inflated estimates. Stalin's death toll is probably closer to 20 million. There is no doubt, though, that Stalin is easily in the top 3 of the most evil people in history, but I don't think you could put him above Hitler on that list.

And also, it helps to be reminded that things are always seen from a particular point of view, and there is usually an opposite point of view too.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

It seems that, all of a sudden, some threads have 50 or more tags (e.g., in the C++ forum). They seem to have just appeared, and they take up a significant amount of space in the listing (6 or 7 rows of tags below each post). Shouldn't there be some limit or something?

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

If you are talking about internet ports, then you can just refer to the first sentence of the wiki page: "In computer networking, a port is an application-specific or process-specific software construct". The important words are in bold. Internet ports are purely a matter of software, not hardware. Modems, network adaptors or routers are not physically switching between different "ports" (e.g., frequencies).

When data is sent over a network (and by extension, the internet), it's bundled into "packets", which are analogous to traditional mail letters, with a "to" and a "from" address (IP addresses). The format of the packets is hierarchical (from low-level to high-level). It starts with an IP header, followed by a transport-layer header (such as the TCP header), and then the application-specific data follows. As you can see in those links, the IP header contains the IP addresses, and the transport-layer header contains the port number. In other words, these are logical ports, constructed and handled by software, not hardware.

There are 65k ports because both the UDP and TCP headers allocate 2 bytes of space for the port number, limiting them to 2^16 = 65,536 values.

When it comes to allowing multiple communications through the same medium (wire, frequency, etc.), this requires a channel access method or multiplexing. For example, cell phones use CDMA, and Wifi mostly uses OFDM. But these are hardware-specific details. Generally, the standards that regulate different kinds of communication media (radio, TV, DSL, Wifi, etc.) …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Well, from the World Bank censuses, the empirical number today seems to put most "normal" countries (i.e., excluding island states and really large countries like Canada, Russia, etc.) between 0.1 and 0.5 hectares per capita, which is about 0.001 to 0.005 km2, or 1 to 5 thousand square meters. So, that's basically a patch of land between 30x30 meters and 70x70 meters. Also note that many countries on the higher end of the spectrum export and/or spoil a lot of their food, while countries on the lower end import food, ration it more, and/or starve.

If you look at the world data, you have about 7 billion people and about 14 million km2 of arable land, giving a similar ball-park figure of about 2000 square meters (about 45x45 meters) per person.

But, of course, there are many caveats to these figures.

For one, it doesn't take into account sources of food like fishing, hunting, raising animals (which needs land for grazing, and that land is not considered "arable" because you're not planting crops on it), or sea-based farming ("fish farming"). In other words, it is definitely not a measure of "how much land would I need to survive".

Also, modern agricultural practices (i.e., the "green revolution") are largely responsible for the dramatic reduction of these figures over the years. In other words, even if you had a pretty good estimate of how much land / resources is needed to feed one person within our modern agriculture system, it would not …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Planes are represented in the general linear form. This form can be understood as a normal vector n = (a,b,c) and a distance d between the plane and the origin, measured along the normal vector. So, for any point (x,y,z), you can compute the following:

a*x + b*y + c*z + d

which will be 0 if the point is on the plane, and positive or negative depending on whether the point is above or below the plane. This is the operation that DX calls the "dot product of a plane and a vector". It is essentially the dot product of the vector with the normal of the plane, offset by the distance of the plane to the origin, such that you get these negative (below), zero (on the plane) and positive (above) values.
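As a minimal sketch in plain C++ (these are not the actual DX types or function names):

#include <iostream>

struct Plane { float a, b, c, d; };  // normal (a,b,c) and offset d
struct Vec3  { float x, y, z; };

// The "dot product of a plane and a (position) vector":
float planeDotCoord(const Plane& p, const Vec3& v) {
    return p.a * v.x + p.b * v.y + p.c * v.z + p.d;
}

int main() {
    Plane ground = {0.0f, 1.0f, 0.0f, 0.0f};  // the plane y = 0
    Vec3 point = {2.0f, 5.0f, -1.0f};
    // prints 5: the point is 5 units above the plane
    std::cout << planeDotCoord(ground, point) << std::endl;
    return 0;
}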

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

This line declares a variable called pNf, which is an auto-pointer to a Notification object, and it is initialized with the result of _queue.waitDequeueNotification.

The AutoPtr is a class template, and the Notification class is its template argument, making AutoPtr<Notification> an auto-pointer to a notification object. I believe that it is essentially equivalent to the standard unique-pointer.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

If you print out all the class sizes, things will become a bit clearer. If I run the following:

#include<iostream>
using namespace std;

class Base {};

class D1 : public Base {};

class D2 : virtual public Base {};

class DD : public D1, public D2 {};

int main()
{
    cout<<sizeof(void*)<<endl;
    cout<<sizeof(Base)<<endl;
    cout<<sizeof(D1)<<endl;
    cout<<sizeof(D2)<<endl;
    cout<<sizeof(DD)<<endl;
    return 0;
}

I get as output:

4
1
1
4
8

So, for a pointer size of 4, the Base class has a trivial size of 1 (because the standard says it cannot be 0), and the D1 class has the same size as Base, because that's all it contains and there is no virtual inheritance there. Then, the D2 class has a size of 4, because it needs to contain a pointer to the virtual base class, and that virtual base class' storage is absorbed into the D2 class (empty base-class optimization). And finally, the DD class has a size of 8, because it is composed of a subobject of class D1 and a subobject of class D2. The natural memory alignment of the platform is 32 bit (4 bytes), and therefore, the compiler will optimize the memory layout such that variables (data members, incl. virtual pointers) fall on 4-byte alignment boundaries. So, the first 4 bytes of memory are used by the D1 subobject, where really only 1 byte is needed while the other 3 bytes are padding (unused bytes), and the last 4 bytes are used …

tapananand commented: Awesome Answer!! Thanks a lot!! +0
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Since robotic motion planning is, generally speaking, the topic of my PhD thesis, I can tell you quite a bit about this, probably far more than you really want to hear. I'll just try to give you a quick overview, with some key terms and places to start looking. If you really want precise information and recommendations, you are going to have to explain in more detail what your problem is really about (objectives, constraints, environment features, etc.).

If you want to do motion planning for a lawn mower around obstacles, then there are a ton of options, which mainly depend on what kind of sensors you have (or expect to have). If you don't have any real capability to map the environment and localize the robot, then there isn't much you can do except what the Roomba-like robots do, which is to go in a straight line, bump into an obstacle, turn to some random direction, and repeat; it's just the random nature of that algorithm that guarantees full coverage of the space with high probability after some time.

If you have local information on obstacles, such as lidar scans or some vision system, then you can consider local planning strategies, such as potential fields (see here).

To fulfill your objectives (mow the lawn, with some pattern), you might want to use something like a moving-target chasing method; for example, many people use missile guidance algorithms for this kind of thing. These …