mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I have been fine-tuning my recursive inheritance to the point that I can define the class I want to insert in the inheritance chain simply like this:

    template <typename... T>
    struct policy_name : T... {
      // code here
    };

But... that's not a recursive inheritance solution. That's the multiple inheritance solution. There must be some kind of a confusion here.

What I would ideally like is to simply use existing structures and types:

 inherit_recursively<existing_struct_1, existing_struct_2 /*, etc. */> my_policies;

where existing_struct_1 is a plain vanilla structure such as:

 struct my_policy{
      // code
 };

Well, assuming you are aiming for a multiple inheritance version, then you can use the technique I demonstrated a few posts back. It went like this:

template <typename... P> struct policies;

template <typename P, typename... T> 
struct policies<P, T...> : public P, 
                           public policies<T...> { };

template <typename P> struct policies<P> : public P {};

struct policy12 {
    // code...
};

// repeat for all desired policies with appropriate code

// and use like this:
policies<policy12, policy24 /*etc...*/> my_chosen_policies;

That gives you exactly that effect, except that it doesn't implement the recursive inheritance scheme.

For a semi-recursive version, this might be the best approximation:

struct NullTerminatingTrait {
  template <typename... Tail>
  struct bind { };
};

template <typename... Args>
struct AnimalTraits;

template <>
struct AnimalTraits<> { };

template <typename T, typename... Tail>
struct AnimalTraits<T, Tail...> : T::template bind< Tail..., NullTerminatingTrait > { }; …
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Actually they are very real indeed. It is part of the core of what I am writing now. Were the things you wanted to point out what followed (mostly the nested EBCO), or is there still another useful trick in your bag?

No, the things I was referring to were not the things I mentioned afterwards. So, here they are.

Firstly, the idea of not keeping track of the size of the list is really quite terrible, for these reasons:

  • That information comes essentially for free, i.e., a simple increment on insertions, and decrement on removals (see the sketch after this list).
  • The memory cost of storing the size value is constant (O(1)): the container object is already the size of two pointers (head, tail) plus the state of the allocator (if any), so adding an integer to that is not a big deal (also, as long as the object fits within a single cache line (usually 64 bytes), its exact size doesn't really matter). The only case where that memory overhead would matter is if you had a large number of small lists, in which case you would never use a linked-list in the first place.
  • Finding the size of a linked-list through traversal is so terribly inefficient (it thrashes the cache) that you would be better off disabling any size-query functions if the size is not being tracked.
  • If you are using an ID instead of a pointer for the "next" node, then your allocator (pool) would have to …
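As promised in the first point, here is a minimal sketch of a singly-linked list that tracks its own size (the names are hypothetical, and only the relevant operations are shown; destructor omitted for brevity):

#include <cstddef>

template <typename T>
class linked_list {
    struct node { T value; node* next; };
    node* head = nullptr;
    std::size_t count = 0;   // the tracked size: one integer of overhead

  public:
    void push_front(const T& v) {
        head = new node{v, head};
        ++count;             // O(1) increment on insertion
    }
    void pop_front() {
        node* old = head;
        head = head->next;
        delete old;
        --count;             // O(1) decrement on removal
    }
    std::size_t size() const { return count; }  // O(1), no traversal
};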
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

1) There is a discussion on interfaces.

The technique (any special name for it?) you are proposing is the way to go

This technique is generally called type erasure. In this context, you could also call it "non-intrusive run-time polymorphism". A nice demo of that is the talk "Inheritance Is The Base Class of Evil" (it starts simple, but ramps up beautifully).
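Here is a minimal sketch of the idea, just to fix the vocabulary (all the names are made up for illustration; the talk linked above develops it much further):

#include <iostream>
#include <memory>
#include <utility>

// Dog and Cat share no base class and have no virtual functions.
struct Dog { void speak() const { std::cout << "Woof\n"; } };
struct Cat { void speak() const { std::cout << "Meow\n"; } };

class any_speaker {
    struct concept_t {
        virtual ~concept_t() {}
        virtual void speak() const = 0;
    };
    template <typename T>
    struct model_t : concept_t {
        T obj;
        model_t(T o) : obj(std::move(o)) {}
        void speak() const override { obj.speak(); }
    };
    std::unique_ptr<concept_t> self;
  public:
    template <typename T>
    any_speaker(T obj) : self(new model_t<T>(std::move(obj))) {}
    void speak() const { self->speak(); }  // dynamic dispatch, added externally
};

int main() {
    any_speaker a = Dog();
    any_speaker b = Cat();
    a.speak();  // "Woof"
    b.speak();  // "Meow"
}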

you seem to think that multiple inheritance is not a good way for multiple interface

Multiple interfaces are not a good idea to begin with (caveat: in that statement, I'm not including interfaces that refine each other (e.g., LivingBeing -> Animal -> Mammal -> Canine)). And when you do have to implement multiple interfaces in a single class, multiple inheritance is a really problematic thing to deal with: it's too intrusive (i.e., it "locks up" the design too much, and you always have to dread that diamond).

In effect, your technique trades memory for an extremely minimal decrease in execution speed (creation of wrappers)

The wrappers do not, in general, impose any overhead. It's all washed away by the compiler. The wrapper just tells the compiler "do this dispatching dynamically", and it does so. The only overhead that will remain is what is necessary to accomplish the dynamic dispatching (object indirection and a virtual table lookup), which is what you would get anyways if you had virtual functions directly in your original class (i.e., that's just the price of dynamic dispatching).

The only …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

6) A great use is not for policies (other than vs2013 repair) but for multiple virtual interfaces that I can simply chain to completely avoid the cost on structure size! That (very useful) use remains even if I switch compiler.

I'm sorry, but I fail to understand how this could be useful at all. Let me set up a simple example scenario: let's say we are writing a library to represent different kinds of animals. Then, we could have a number of "policy" classes (or virtual interfaces, or whatever you want to call them) for different actions that animals do:

struct Walker {
  virtual void walk() {}  // trivial bodies given here so the example links as-is
};

struct Runner {
  virtual void run() {}
};

struct Crawler {
  virtual void crawl() {}
};

struct Eater {
  virtual void eat() {}
};

struct TailWagger {
  virtual void wagTail() {}
};

// ...

So, under normal (OOP) circumstances, you could do this:

struct Dog : Walker, Runner, Eater, TailWagger {
  // ... dog-specific code (if any) here ...
};

And then, you could have Dog objects act polymorphically inside functions like this:

void wagTailThreeTimes(TailWagger& tw) {
  tw.wagTail();
  tw.wagTail();
  tw.wagTail();
}

int main() {
  Dog d;
  wagTailThreeTimes(d);
}

Now, using your solution for a recursive subclassing, you could re-implement the above using an approach like this (or a variant of it, which all boil down to the same thing):

struct NullTerminatingTrait {
  template <typename... Tail>
  struct bind { };
};

template <typename... Args>
struct AnimalTraits;

template <>
struct AnimalTraits<> …
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

It has a major useful feature: if the subclasses have virtual functions you only get a single vtable!

That's a good point. I didn't think of that. To have a single vtable pointer, you need a single inheritance chain.
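You can see this by comparing object sizes. The layout is implementation-specific, but this is what you would typically observe with the Itanium ABI (the class names here are just illustrative):

#include <iostream>

struct A { virtual ~A() {} };
struct B { virtual ~B() {} };

// Single inheritance chain: the whole chain shares one vtable pointer.
struct Chain : A { virtual void f() {} };

// Two polymorphic bases: typically one vtable pointer per base.
struct Multi : A, B {};

int main() {
    std::cout << sizeof(Chain) << '\n';  // usually sizeof(void*): one vptr
    std::cout << sizeof(Multi) << '\n';  // usually 2 * sizeof(void*): two vptrs
}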

So is there a way we could benefit both from the virtual-function memory shrinkage AND the absence of repeated code?

To be honest, I'm wondering why you would want policy classes to have virtual functions in the first place. I can hardly think of a reason why that would be useful, and even less so within your original solution, since you can't use the policies as polymorphic base-classes anyways (due to the varying tail-end of the inheritance chain). I think you need to give me a use case for this, because it makes no sense to me otherwise.

Given that this requires a single inheritance chain, it's pretty hard to get unique instantiations for each policy, since they need to be inserted somewhere in that chain. Therefore, each policy will always depend on the tail-end of that chain (that it inherits from).

So, I don't think you can really have it both ways.

However, you might be able to avoid virtual functions by implementing the dynamic polymorphism externally to the policy classes. My tutorial on this subject might enlighten you.

And also, is there a way to get EBCO with vs2013?

Ask Microsoft. But don't get your hopes up, they've ignored …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Neat piece of code!

However, there is one major problem. The problem is that each individual policy (where you have the code) will be instantiated for every unique combination of P... packs. For example, if you have these two instances in your code:

policies<policy12, policy24> my_policies_1;
policies<policy24, policy12> my_policies_2;

you will have the following template instantiations generated by the compiler:

struct policies<policy12, policy24> : public policies<policy24> {
  // code for policy12
};

struct policies<policy24> : public policies<> {
  // code for policy24
};

struct policies<policy24, policy12> : public policies<policy12> {
  // code for policy24
};

struct policies<policy12> : public policies<> {
  // code for policy12
};

You see, the problem is that here you have repeated instantiations of the same code, for no reason (i.e., the repeated code is not instantiated differently at all). This is wasteful, and will lead to code-bloat and longer compilation times. This is the real tricky part with using templates in this way: you have to carefully analyse the generation of instantiations.

Here is a way to fix that problem:

template <int... P> struct policies;
template <int P> struct policy;

template <int H, int... T> 
struct policies<H, T...> : public policy<H>, 
                           public policies<T...> { };

template <int H> struct policies<H> : public policy<H> {};

enum { policy12, policy24, policy56 /*etc..*/ };

template <>
struct policy<policy12> {
    // code...
};

// repeat for all desired policies with appropriate code

// and use like this:

policies<policy12, policy24 /*etc...*/> my_chosen_policies;

The above …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Nice article!

I think we are really starting to see a turning point these days. A lot of free software is now catching up to commercial software, or at least, close enough that it becomes hard to justify the cost of buying it.

In my opinion, for amateurish work on almost anything, there is an adequate FOSS solution out there. Just like LibreOffice vs MS Office, unless the office suite is your bread and butter, LibreOffice is probably going to be perfectly adequate (light work, occasional use, etc.).

But what's been happening now is that a lot of FOSS software is catching up, even in professional / engineering fields. Like many game development efforts that rely on Blender instead of some other (expensive) "professional" 3D modeling software. Like the plethora of electronics design software (EDA/eCAD) that mostly rivals commercial solutions. Like GIMP that rivals Photoshop. Like FreeCAD that is starting to look a lot like SolidWorks. Like Code-Aster / Code-Saturn / Salome that hands-down beats most of the very expensive FEA / CFD software packages.

I'm just wondering... what does that entail for these commercial software companies and their employees?

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Did you try the mingw32 package?

You can install it with:

$ sudo apt-get install mingw32 mingw32-binutils mingw32-runtime

or whatever the equivalent packages are for Kali Linux (you might have to manually download the .deb files for these, and install them with $ sudo dpkg --install name_of_package.deb).

This will make it easier because MinGW comes with the standard Windows headers and libraries, and you won't have to fiddle around too much with the include-paths and all that. See compilation instructions here.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Hello and Welcome!... btw, you have one heck of a name to be stepping into the world of IT!

dennis.ritchie commented: thank you +0
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Smart pointers are fine, but unless they are combined with reference-counting life-cycle management, they are of limited use. I'm not that familiar with C++11's shared pointers, so it looks like I have some additional research to do. :-)

Yeah, you probably should. ;) Shared-pointers (shared_ptr / weak_ptr) are surprisingly robust and dependable. They are indeed reference-counted pointers. Moreover, the reference-counting is thread-safe (via atomic operations). As far as I'm concerned, it renders any other kind of reference-counting scheme deprecated, i.e., there's just no need, if you were to write a completely new library or application, to create another reference-counting scheme. And when you account for all the situations when unique ownership is appropriate (e.g., RAII-style data members or base-class, or unique_ptr pointers), and all the situations when shared ownership is needed (which are rare, actually), there is very little left (or none?) that would justify a full-blown garbage collector (i.e., 99.9% of the time, using a garbage collector is like killing a fly with a sledgehammer).
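For instance, here is the basic behavior in a nutshell (Widget is just a made-up type):

#include <memory>
#include <cassert>

struct Widget { int value; };

int main() {
    std::shared_ptr<Widget> p = std::make_shared<Widget>();
    std::weak_ptr<Widget> w = p;       // observes, but does not own
    {
        std::shared_ptr<Widget> q = p; // count goes to 2 (atomically)
        assert(p.use_count() == 2);
    }                                  // q destroyed: count back to 1
    assert(p.use_count() == 1);
    p.reset();                         // count hits 0: the Widget is destroyed
    assert(w.expired());               // the weak_ptr sees that it's gone
}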

Also, the proof is in the pudding. Most of the fairly serious libraries that I have ever looked at generally use either an intrusive reference-counting scheme (for older libraries, mostly), or they rely on shared_ptr (or a similar variant, such as QSharedPointer in Qt, or IntrusiveRefCntPtr in LLVM), or, they don't really use shared ownership at all (and often, you don't have to, because software often naturally writes itself without it). Garbage collector libraries for C++ have been available for decades, but I …

rubberman commented: Thanks Mike. I haven't studied the C++11 standard to any extent. Time for some homework! :-) +12
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

In any case, throw() specifies that it may throw ANY exception.

That's wrong. Maybe you forgot the negative in that statement, it should say: "throw() specifies that it may NOT throw ANY exception."
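In code form (with the C++11 noexcept specifier, which is the modern replacement, shown for comparison):

void f() throw();    // C++98/03: promises to throw NO exceptions at all;
                     // violating it calls std::unexpected(), which by
                     // default calls std::terminate().

void g() throw(int); // C++98/03: may throw only the listed types (here, int).

void h() noexcept;   // C++11: promises not to throw; violating it calls
                     // std::terminate() directly.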

Here is the whole story. Exceptions are standardized in the C++ ISO Standard document. Their behavior is fully specified and compilers mostly comply with that specification, and have done so for a couple of decades now (at the very least, since 2000 or so). The behavior of exceptions is specified in terms of how the stack should be unwound following the throwing of an exception, what should happen in special cases (e.g., std::terminate() is called if an exception is thrown during stack unwinding (e.g., from a destructor)), and how the appropriate "catch" statement is selected. That's all the "observable" behavior that is specified.

Note, however, that the standard says nothing about the implementation of those behaviors, i.e., compiler-vendors are free to implement those mechanisms any way they see fit, as long as it behaves correctly. However, most compilers comply with the Intel Itanium C++ ABI standard which has a detailed and standardized description of the so-called "zero cost exception handling" implementation, see here. In fact, the only modern compiler that does not implement and comply to the Itanium ABI standard is the Microsoft compiler (MSVC), which is, of course, the last compiler you would use if you care about standardization or reliability at all (i.e., MSVC is not really a production-quality compiler).

So, …

rubberman commented: I stand/sit corrected! Thanks Mike. +0
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I don't know that you can find out what the password for root is, unless you have it already and can thus login as root. There would be no point in having a password if you can find out what it is without having the proper credentials.

However, one common issue (with beginners) is that they try to issue a su command (which temporarily switches to root user (super-user) within the current terminal) and it doesn't work because they don't know the root password. Normally, if you have super-user privileges on your account, you can use the sudo command to run a particular command under super-user (e.g., $ sudo yum install ..). You can also use that command to do the su command, i.e., you can do $ sudo su, which will require your user password (not the root password), and will grant you super-user status (root). After that, if you want to change the root password (which is randomly generated upon installation), you can use the passwd command.

You can also recover from having forgotten the root password by booting into single-user mode. See instructions here.

But if your user account is not a "sudoer" (meaning you can do sudo from your user account) or you don't have physical access to the computer, then it means that you do not have the necessary credentials to be a super-user on the system. And at that point, trying to get the root password would be tantamount to hacking the …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

There are alternatives to google if you want more unbiased results; in particular, you can use DuckDuckGo. Here is a comparison of results for "php programming", DDG vs Google. As you can see, you get far fewer ads or general "preferred" websites, and instead, more diverse and relevant results.

Google is largely turning into a man-in-the-middle for the internet as a whole, and that's a huge problem. On the one side, you have all the people depending on google to allow them to find stuff on the internet, and on the other side, you have tons of businesses that rely almost exclusively on being easy to find from google in order to be able to get customers. That is a huge position of strength for google, meaning that they can make or break any business, and thus, they can extort everyone. They used to just sell ads on their sites. Now, they can extort businesses for their right to be visible on the internet. And that's why their search results are so biased now.

As far as filtering is concerned, I don't know of any good tool that exists for that. It is quite difficult to "filter" searches without actually creating your own search engine, because it's really not that easy to discriminate between different categories of content (ads, commercial, educational, etc.) without doing a lot of indexing of your own. I would suggest you look into the alternatives to google, many of which actually index …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

A platform to tell the world that you just went to the toilet, and if you're lucky you can post a photo of this amazing event.

That gives me an idea for a pretty useful app. How about an app that can analyse a picture of your stool and tell you if you are healthy, if you ate too much or too little of something (e.g., fiber, spices, etc.), if you drink enough water, or if you should go to the doctor right away, because experts can tell a lot of stuff like that just by looking at the stool. That would really be the apogee of the "app madness": taking pictures of your own shit!

Reverend Jim commented: Great idea. An AI app you can call Brains For Sh!t. +0
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

What's new in the software development world? Last I heard wechat and instagram were the next big things

Things like wechat and instagram have very little to do with software development. Saying that wechat and instagram are the next big thing in software development is like saying that tablets are the next big thing in the molding of plastics. Molding plastic for one thing is as easy as for another, and new products are rarely revolutionary in the techniques used.

It would be more accurate to say that these things are the next big thing in social media, or internet marketing, or in the mobile markets. But from a technical "software development" point of view, these things are trivial and are not pushing any boundaries, as far as I'm aware. So, be sure to focus on the correct terms. If you want to know what's the next "app bubble", then that's one thing, but software development is something else.

From looking at current job postings and stuff, clearly, the current big thing in software development is data mining and predictive analytics. Also, computer vision is booming (e.g., kinect on steroids, facial recognition, vision-guided self-driving cars or robots, etc.). And, of course, more distributed software paradigms (e.g. cloud stuff, more inter-connected smart devices (in home, car, etc.), etc.). I think these are the current and next big challenges in software development.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Well, as you probably know, the author (Andrew Koenig) of that book used to be quite active here on Daniweb, see his profile. I would suggest you ask him directly. And as far as buy vs steal, you could just get the unofficial ebook, and wire (e.g., PayPal) some money to Mr. Koenig or his wife.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I beg to differ, CentOS is a very well-established distro of Linux. It is one of the main free alternatives to the big names in server distributions of Linux (RHEL and SLES). If you want a stable server-grade Linux distribution for free (no support plan costs), then CentOS is a really good option, and so are the up-stream distributions like Fedora (ahead of RHEL) and OpenSUSE (ahead of SLES).

I need to use apt-get and ipkg in my CentOS.

I don't understand why you want to install a Debian package system on a Red Hat system. What's wrong with the Red Hat package system (rpm and yum)? The apt-get command is equivalent to yum, but the former is for distros that use Debian packages (Debian, Ubuntu, Mint, etc..), while the latter is for distros that use Red Hat packages (Fedora, RHEL, CentOS, SLES, OpenSUSE, etc..). Similarly with ipkg / dpkg versus rpm.

Is there a really good reason why you want to rip out the Red Hat package manager from your OS, and transplant the Debian package manager instead? I don't think this is really possible without some crazy amount of work.

And what are you going to do once you have installed Debian package managers, which repository will you connect to? Debian repos? Are you gonna translate your own packages (take rpm packages, convert them to deb packages, upload them to a private repository, and then install them with apt-get)? Seems like a lot of trouble.

If …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Using a BigInt library is one solution.

Also, what are those digits?

If this is just a very big number, then you might just use a floating-point type instead (float or double), which don't really have an upper limit (well, they do, but it's like 1e308), and instead have limited precision (about 15-16 significant digits for double).

If the digits are actually something like a serial number, social security number, bank account number or anything like that, then you most likely don't need to be doing any math operations on those numbers (e.g., you can't add two social security numbers!). So, in this case, you don't really need to store the number as an actual number (integer), but instead, you could just store it as a string (text), or some other ad hoc representation (like 5 integers, one for each chunk of 5 digits). That's what you normally do for long numbers that are not mathematical in nature. This is because the BigInt classes are usually overkill for this situation because of all the math operations they bring into the picture (that you won't use).
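For example, here is a minimal sketch of the string approach (the 25-digit format and the function name are just made up for illustration):

#include <string>
#include <cctype>
#include <iostream>

// A long "number" that is an identifier, not a mathematical quantity:
// store it as text, since no arithmetic will ever be done on it.
bool is_valid_id(const std::string& id) {
    if (id.size() != 25) return false;
    for (char c : id)
        if (!std::isdigit(static_cast<unsigned char>(c))) return false;
    return true;
}

int main() {
    std::string account = "1234567890123456789012345";
    std::cout << (is_valid_id(account) ? "valid" : "invalid") << '\n';
}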

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

is it possible for me to write my own class deriving from iostream that would deal with cin and cout, but act as one stream.

Yes and no. To do this, you would have to violate the interface of iostream. The problem here is that cin and cout are not really connected with each other (except for synchronization); they are two distinct streams, like two distinct files. Whatever you write onto cout, you will never be able to read out of cin. This means that if you were to create an iostream that takes all input from cin and puts all output to cout, you would not get the same behavior as you get from an iostream like fstream or stringstream, which both operate on a single buffer. For example, if you have a stringstream, you can write stuff to it, and then read it back out afterwards, and the same goes for fstream. The point is, this behavior is the expected behavior you get when using an iostream, and you will never be able to get that behavior out of the standard streams (stdin, stdout) because they are separate streams.
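To illustrate the single-buffer behavior that the standard streams can never give you:

#include <sstream>
#include <string>
#include <iostream>

int main() {
    std::stringstream ss;       // one buffer behind both directions
    ss << "hello";              // write to the stream...
    std::string word;
    ss >> word;                 // ...and read the same data right back out
    std::cout << word << '\n';  // prints "hello"

    // By contrast, writing to std::cout never makes that data readable
    // from std::cin; they are two separate streams, like two distinct files.
}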

You could do some trickery to merge the two standard streams, which is possible through OS-specific functions. For example, you could pipe stdout into stdin, and then, you would get the correct iostream behavior. However, with that, you lose the actual console input completely (and probably the console output too), and at …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Well, it's true that this may not be appropriate for whatever is Labdabeta's actual problem in its broader context. And, like you, I would invite him to give us more information on that.

However, this is far from being an unusual solution. The code I posted above, or a variation thereof, is extremely common. I have written such code in many places, and I have seen it in many other places too. And there is nothing unusual about the situation Labdabeta is in.

I would say that the only thing that is unusual here is that Labdabeta seems to require a bi-directional (I/O) stream. A true bi-directional stream, meaning one where you can go back and forth between reading and writing on the same data, is very rarely needed, and never a good idea either (typically, you do most operations in memory and then dump to a stream, or vice versa, take it all from the stream and then operate in memory).

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

However cout uses ostream instead of istream and therefore its common ancestor with fstreams would be ios.

Well, naturally, cout uses ostream because cout is an output stream, not an input stream. If you want an input stream, you have to use cin. And fstream derives from both istream and ostream, btw.

Generally, if you want to have a stream object that is either a file stream or a standard (console) stream, or a string stream for that matter, the general solution is to have a pointer to an ostream or istream, depending on the direction you need.

You cannot, however, have a generic bi-directional stream (iostream), because only some streams allow that (e.g., files and strings), but not the standard streams (cin / cout). And this is the case whether you use C++ streams or C-style IO functions, because the standard streams behave the same in both cases. The only difference is that in C++ you are forbidden at compile-time from attempting to do input operations on cout, or vice versa. With the C functions, doing so is permitted by the compiler, but will result in an error at run-time.

So, if you need iostream, then you can use that too, but only for things that actually support this bi-directionality, which excludes the standard streams (and any other similar streams, like internet socket streams).

Here is how you would typically handle having a generic stream:

class any_ostream {
  private:
    std::ostream* out_str;
    bool should_delete;

    // non-copyable:
    any_ostream(const …
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Clang is great. Not perfect, not problem-free, not super-clean, not without its challenges, but still great.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Have you had much experience with it?

Don't pay attention to what I said, I was just venting some repressed anger. I've just started to dig into clang's code, and yesterday, I spent several hours chasing down a resource-leak issue through the maze that is the front-end architecture of clang, only to find that they intentionally leak most of their high-level data structures, and even more egregious, they hide the leaks so that memory profilers will give them a "clean bill of health". Basically, they constructed such a messy architecture that they don't know how to deconstruct it cleanly, and so instead, they just leak it all, including file handles and the whole shebang. Very nasty stuff. As soon as you depart from RAII in C++, you set yourself up for failure in the long-run, IMHO.
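For context, this is the kind of scope-bound cleanup that RAII gives you for free; a minimal sketch for a C file handle (a real implementation would also handle move semantics):

#include <cstdio>
#include <stdexcept>

// The handle is released on every path out of scope, exceptions included.
class file_handle {
    std::FILE* f;
  public:
    explicit file_handle(const char* name, const char* mode)
      : f(std::fopen(name, mode)) {
        if (!f) throw std::runtime_error("could not open file");
    }
    ~file_handle() { std::fclose(f); }  // no leak, ever
    std::FILE* get() const { return f; }

    file_handle(const file_handle&) = delete;             // non-copyable
    file_handle& operator=(const file_handle&) = delete;
};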

There are also tons of nasty hacks that I had never even dreamed of in my worst nightmares, and with no obvious reason for using them. And I have barely seen a small cross-section of the code so far.

I'm sure that other compilers are even more messy under-the-hood. Clang is indeed nicer and faster, and more easily adaptable due to being coded in C++. However, that doesn't guarantee that things won't get messy, and they already have. And, it is still very much a work in progress (the code is riddled with "FIXME" and "TODO" stuff everywhere).

But again, I'm just venting here.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Regarding the speed cost, it may add up because if you traverse a full directory tree (with > 1M file entries) that is memory buffered, going from one item to another is just a jump of a ptr, but copying the structure probably becomes a time-critical event.

Uhmm.. I'm not sure what you mean here. The cost of that extra copy of the object is not going to become more expensive as the record becomes bigger. It might add up, but it doesn't become worse. And, it will always remain an insignificant fraction of the whole execution time.. because everything else adds up too.

(yes I know, it is more probably the memory access lag)

Well, that depends. The memory access lag (i.e., reading off the file list from the file-system on the hard-drive) might not necessarily be that bad because of caching (temporarily putting HDD memory on RAM for quick access), pre-fetching (anticipating up-coming reads), and other things that can streamline that access. Of course, reading off a list of files on the HDD is always going to be very slow in terms of memory access. But, even if that wasn't there as a bottle-neck, you would still have the context-switch overhead. The thing here is that reading off the file-system requires a temporary context-switch from user-space (where applications run) to kernel-space (where the OS runs), and that is expensive enough by itself to render the copying cost insignificant.

Consider this:
- Cost of copying …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Over the past 50 years, gun owners have been responsible for over $2 billion in wildlife conservation in the United States due to a 10% tax on guns and ammo.

That's great! But the target is wrong. Statistically, one of the primary uses for firearms is to commit suicide. So, the tax on firearms should go towards funding suicide prevention programs and mental health treatments.

Fun fact:
The largest living organism in the world is a mushroom in Oregon. link

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Ok, so let me get this straight. Currently, your code ends up doing the following thing for each iteration:

  1. Fill in the data into a WIN32_FIND_DATA object (within your input iterator) via the FindNextFile function.
  2. Copy-construct a new WIN32_FIND_DATA object as part of "push_back()".

And you want to change it to be:

  1. Fill in the data into a new WIN32_FIND_DATA object (within the vector) via the FindNextFile function.

The problem with doing this is that there is really no way to avoid constructing the WIN32_FIND_DATA object before you can fill it with data from the FindNextFile function. That's by design of the FindNextFile function. The best you could do would be to default-construct a new WIN32_FIND_DATA object in the vector (and because it is a C struct, default-construction is trivial, meaning that it does nothing at all), and then fill it in. In other words, the most efficient version would be this (as a simple loop):

std::vector<WIN32_FIND_DATA> vwfd;
HANDLE handle;
while( /* still have files.. */ ) {
  vwfd.emplace_back();
  FindNextFile(handle, &vwfd.back());
}

As far as finding a way to coerce that into a back-inserter / input-iterator framework, I don't think it's worth the trouble. It could be possible, by using some kind of proxy class.... but really, you might just try to rely on NRVO (Named Return-Value Optimization), with something like this:

struct find_file_iterator {
    HANDLE handle;

    WIN32_FIND_DATA operator*() const { 
      WIN32_FIND_DATA wfd;
      FindNextFile(handle, &wfd);
      return wfd;
    }
    find_file_iterator& operator++() { return *this; }
    ... …
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

No, there is in general no risk of opening up your computer to malware by writing your own software.

First of all, most malware is either packaged with specific applications (e.g., tainted freeware, or pre-installed crapware) or it exploits a vulnerability in a specific application or platform (e.g., MS Office, flash, .NET, etc.).

There are some vicious viruses that infect everything, but they are rare, and your home-made software is no more vulnerable to them than any other program (i.e., basically, the only protection against those are in anti-virus software or anti-viral recovery media).

Secondly, the things that are really dangerous from a malware perspective are applications that straddle the safety boundaries of the operating system. In an operating system, there are boundaries like between user-space and kernel-space, or between different privilege levels of users (guest, user, admin, super-user, user-groups, etc..). The key to most viruses and malware is to find a crack in those boundaries to try and move "up" from a normal application execution environment (e.g., low-privilege user-space) to a more powerful environment (e.g., super-user / admin, or running kernel-space code). This is because that is the environment in which you can truly do some damage or be able to permanently "hide" the existence of the malware / virus.

So, when people try to diffuse malware or viruses, they will look for vulnerabilities (or exploits) that will allow them to make that move. This means that they need to target applications, frameworks, protocols or OS APIs that provide …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

When you have a compiled program, i.e., software, then that is just a chunk of memory containing machine codes (or instructions). So, as AD said, the thing that converts software (source) into binary form is the compiler. Once you have a binary executable program, the way that it gets executed on the processor is actually very simple (it has to be, it's just "dumb" electric circuits after all).

Each instruction is a sequence of 0s and 1s (bits), that is usually 32 bits long (or more or less, depending on the architecture). Imagine that sequence of bits like a sequence of valve positions for a series of pipe bifurcations carrying water. If 0 is left and 1 is right, then you could have an instruction like 11001, which would mean to set: valve 1 = right, valve 2 = right, valve 3 = left, valve 4 = left, and valve 5 = right. That unique sequence of valve positions carries the water along a unique route through the pipes, to a unique destination. This is basically the way processors execute instructions, except that the "water" is made of electrons and the "valves" are transistors. That is probably the simplest way to picture it.

Complete instructions to the processor are usually composed of an instruction (e.g., "add", "subtract", "increment", etc.) and one or two operands that sit on registers, which are little storage units directly at the "entrance" of the processor. So, if you do an operation like "add R1, R2" (which …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Bush would still have won had there been a recount.

The problem with the 2000 Florida elections was not about the recount, but about the disenfranchisement of tens of thousands of primarily african-american voters in Florida. This "error" won the election for Bush. And this "error" came from a private company who have since settled a lawsuit, effectively admitting to being responsible for the massive disenfranchisement that undoubtedly decided the election.

And that issue was part of a coordinated affront on black votes, most likely coordinated by Bush's brother Jeb. Hard to prove, but very obviously so.

But this is 15 year-old news and not worth repeating.

Reminding people of the actions of criminals who conspired to overtly violate the US Constitution and take a piss on Democracy and everything Americans stand for.... yeah, that is worth repeating.

Reverend Jim commented: Very well put. +0
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

The way I see it, if we can make online banking systems that we can trust (presumably), then we should be able to make an online voting system that we can trust.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

it would be netcat, not netcat.exe, although changing the name wouldn't harm anything, or you could make sure that the Makefile created it as netcat.exe.

I don't quite understand that sentence, but there is at least one interpretation of it that is very wrong. Linux executables are not the same as Windows executables, no matter what the extension is (with or without .exe). Linux uses ELF format (in the Unix/BSD tradition), while Windows uses PE format. These are completely different formats and unless you run under an emulation layer (like Wine in Linux, or Cygwin in Windows), there is no way to run one format in the other OS, AFAIK. Changing the extension does not do anything.

MinGW cross compiler under linux

The main problem really is to find a way to tell GCC to generate Windows code. And by Windows code, I really mean two things: it needs to use Windows libraries; and it needs to be packaged in PE format (.dll, .exe, etc.). The executable code itself is just dependent on the processor, not the OS. I have very limited experience with setting up a cross-compilation environment. I just know that it's common for embedded systems and things like that, where you can't really compile stuff on the target platform (it's too small), but in general, those are still Linux-to-Linux cross-compilations, just with a different target architecture and linking with specific libraries.

I would imagine that cross-compiling anything serious for Windows but under Linux …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Another good thing to do is to block ports in your router. There are tons of ports that are not useful for anything but exploits (e.g., ports that are used for some old or obscure feature that nobody really uses except that these "features" contain holes that hackers can use to get in). There are reports out there that detail all the ports that you should block... NSA has a number of public reports of that nature that you can follow. Router or computer firewalls generally don't block those ports because they are mostly focused on blocking torrents and other p2p protocols, they don't block "official" protocols, which is where real hacks come from.

It's also important to understand that 99.9% of "hacking" uses the "shotgun approach". The idea here is that they just diffuse their malicious software all over the place and catch the most vulnerable people. As long as there are enough vulnerable people to make it worth-while, they won't try a more aggressive attack. In other words, why try to attack some random guy who runs a secured version of Linux behind a uber-paranoid port-blocking router when you can just attack the grandma who thinks that the anti-virus she installed 3 years ago and never updated / renewed is keeping her safe, as she clicks on any random thing that pops up on her screen.

And at the end of the day, whatever the hacker is doing, the data must come out of your computer onto your …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

As rubberman says, in Linux, drivers are implemented as kernel modules. Most drivers are open-source. Basically, the development community didn't wait for hardware manufacturers to create Linux drivers for their hardware, because they could very well have waited forever, and instead, they developed open-source drivers. For the most part, these drivers are written by reverse-engineering the Windows drivers, and making "lowest common denominator" drivers (manufacturers usually re-use the same basic set of commands between models, so, a "basic" driver will work for a whole series of products).

So, this means that most drivers in Linux are open-source and can therefore be packaged with the Linux distribution (Ubuntu, Fedora, Debian, RHEL, etc.) installation and package-repository. This means that as you install Linux (most popular distributions anyways), it will automatically check your hardware, automatically download / install / enable all the appropriate drivers, and you will probably not have to do anything after that. It is possible that a few peripheral things are not working (e.g., wireless, microphone, webcam, etc.) or not working as well as they could (e.g., graphics card, etc.). If that's the case, you can check if there are proprietary drivers for those specific things (and installing them is easy, and there are usually simple instructions). If there are any issues after that, well, you know where to ask for help ;)

RikTelner commented: Thank you. I sure know where to ask help :). +2
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

So, from doing tests with Dani in the chat, it appears that the culprit for this issue is the Google+ button. For instance, even this tutorial page about the Google+ button leaks memory at every refresh (for me). So, this appears to be a bug between google+ and google-chrome... ;)

For the moment, the bug can be fixed by removing the google button, which comes from this file: https://apis.google.com/js/plusone.js

If, in your browser, you black-list it or something, I guess it would fix the leak. Anyone have an easy suggestion on how to do that (I'm not much of a javascript guy)?

I might file a bug report to google.. if I care enough to do so. But it seems the problem with that is that it isn't very consistent (easy to reproduce) between platforms (even with the same chrome version).

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

We are going to use a game building kit/platform that works on mac, windows and linux. So first, I would like to know if someone can tell me of the platform he might be talking about.

There are a few possibilities, depending on the scope. As Alex mentioned, he could be talking about Unity3D, which is a cross-platform 3D game engine that is kind of popular for entry-level home-made 3D games. But frankly, I doubt that in your first year of CS you would be asked to do a 3D game, even a simple one. It just seems a bit over the top, and also, you would be spending a lot of time just working out the kinks of 3D graphics and modeling, and not much time coding.

It is possible that he is referring to a simpler 2D platform. Something like SDL, or maybe even flash (which is basically just interactive animations, really).

What programming languages have you been focusing on? Because that would be quite telling of which platform your prof has in mind.

I imagined if the game can support the three platforms then it might not be too graphic intensive

The fact that it supports all three platforms has no bearing on the intensity of the graphics. In fact, Windows is the worst platform for graphics, and it's pretty good, so, there isn't much of a limit here.

and the fact that it is a 9 weeks.

That's …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

It's a mathematical theory about the optimality of strategies in a game. A game is generally defined as a set of rules that determine the possible actions that each player can make and the rewards given to each player in a round of play (each player makes a move). John Nash is best known for the "Nash Equilibrium", which defines a set of conditions under which the strategy of each player is locally optimal given the strategies of the other players. More interesting, however, is that the global optimum (most rewards for all) might not be one of those Nash equilibrium points, from which one can show that cooperation can beat competition.

This theory has implications in economics, sociology, ethics, etc., as it provides a mathematical framework to evaluate the optimality and stability (stable equilibriums) of a collective set of strategies. For example, the stock market can be seen as a game in which all players try to make the best investments (actions), and depending on each other's investments, they get rewards (returns on investments). And so, being able to analyse which moves are the best in that context is quite important (for the investors), and also, analyzing what should be the best collective investment strategies is quite important (for the society at large). In ethics, there are similar considerations, i.e., maximizing the well-being of everyone.
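The classic illustration is the prisoner's dilemma. Here is a tiny sketch with the usual textbook payoffs (the numbers are just illustrative):

#include <iostream>

int main() {
    // payoff[me][other]: my reward (higher is better).
    // Strategy 0 = cooperate, 1 = defect.
    const int payoff[2][2] = { {3, 0},    // I cooperate: 3 if you do too, 0 if you defect
                               {5, 1} };  // I defect: 5 if you cooperate, 1 if you defect

    // (defect, defect) is the Nash equilibrium: switching alone makes me
    // worse off (payoff[0][1] = 0 < payoff[1][1] = 1)...
    std::cout << "switch alone from (D,D): " << payoff[0][1]
              << " vs stay: " << payoff[1][1] << '\n';

    // ...and yet mutual cooperation is globally better for everyone:
    std::cout << "mutual cooperation: " << payoff[0][0]
              << " vs mutual defection: " << payoff[1][1] << '\n';
}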

Now, if you think that this has much to do with computer games or things like that, then you …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

The word-size of a computer architecture is just the number of bits that the computer can digest at one time (for each instruction). So, for an 8-bit processor, it just means that it can only perform operations on 8-bit numbers at a time. This does not mean that it cannot digest bigger numbers, it just means that if it needs to digest a bigger number, it must break up that number into 8-bit chunks and digest each one at a time.

If we take the analogy of eating food, then the limit on the amount of food you can put in your mouth at a time does not limit the total amount of food you can consume, it just means that it will take longer to eat a big plate of food if your bites are smaller.

The standard Unix representation of time (date) has always (AFAIK) been using a 32bit signed integer (and lately, using a 64bit integer) for the number of seconds since the epoch (1970). On 8-bit platforms, this would mean that in order to manipulate a date (e.g., adding a year to a date), the computer would have to add the two 32bit numbers by individually adding 8bit chunks of it.. meaning, four additions with carry (which would be 7 additions total). But the point is, it can still deal with larger numbers than 8-bit, it's just that it needs more work to do so.

If we take the analogy of doing additions like we did …

RikTelner commented: Mike saves world again :D. +2
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

First of all, in an unsigned byte, the maximum decimal is 255 (0xFF). And second, it's not because the word-size is only 8 bits (1 byte) that you cannot create numbers that are larger than that, it just means that if the numbers are larger than the word-size, you have to handle them word-by-word. For example, if you add two multi-word numbers, you just have to add the least-significant words from each number, and then keep the carry for the next word addition, and so on..

Think of it this way. You, as a human, when you were in elementary school, you could only represent a number between 0-9 through writing a digit down (i.e., that is your native "word-size"). But, you could still represent very large numbers and do many complicated operations with them, right? Well, it was the same for these computers. And it's still the same today for very large numbers that exceed the 32bit / 64bit word-sizes.
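As a concrete sketch, here is 32-bit addition done in 8-bit chunks with a carry, the way an 8-bit processor would have to do it (the helper function is hypothetical):

#include <cstdint>
#include <iostream>

std::uint32_t add32_in_8bit_chunks(std::uint32_t a, std::uint32_t b) {
    std::uint32_t result = 0;
    unsigned carry = 0;
    for (int i = 0; i < 4; ++i) {                 // least-significant chunk first
        unsigned chunk_a = (a >> (8 * i)) & 0xFF;
        unsigned chunk_b = (b >> (8 * i)) & 0xFF;
        unsigned sum = chunk_a + chunk_b + carry; // one 8-bit addition
        carry = sum >> 8;                         // keep the carry for the next chunk
        result |= (sum & 0xFF) << (8 * i);
    }
    return result;
}

int main() {
    std::cout << add32_in_8bit_chunks(123456789u, 987654321u) << '\n'; // 1111111110
}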

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

okay so how many rep points are needed to grab a top level position of daniweb amigo??

Take the number of rep points that Ancient Dragon has, multiply that by 10%, and that pretty much means you are at top level.. i.e., if you can come up to AD's ankles (10%), then that's a major achievement here. ;)

Stuugie commented: nothing like you being a grown man and some dude calling you Mikey. I'm a Mike and that shit stopped 25 years ago, unless it's a friend calling me Mikey. +0
Mike_danvers commented: tit for tat +0
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Did you try to see what you could do with the instructions on Migrating your KMail/Kontact setup to a new distro? I think that just copying all those config files/folders from the old distro (Mint) to your new system should do the trick, at least, that's what it's intended for. But to be safe, you should back up your Kubuntu configs before overwriting them with the configs from your Mint drive.

All I can see is that you might have some issues if the versions of the KDE suite are very different between the distros, or if you use different user-names / accounts. In that case, you might have to manually edit those files to make the repairs. Usually, those kinds of config files are just simple text files with lots of fields and stuff (but maybe emails / contact-lists are encrypted). So, it is usually quite easy to open them up in a text editor and just modify them accordingly (e.g., just open a "fresh" Kubuntu config file and the corresponding Mint config file, and it should be quite obvious what you need to change to make the Mint config work on the Kubuntu system). But I don't think that this will be a problem.

Gribouillis commented: good help +14
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Linux can handle all of this very well (much better than Windows, in most cases). The real question is more about how optimal it will be. The first thing you have to understand is that one of the most important application domains (if not the most important) of Linux is for servers and mainframes, which are just really powerful computers. So, Linux is, in general, much ahead of the game when it comes to handling and efficiently exploiting hardware with powerful CPUs (with lots of cache, lots of cores), large hard-drives (several tera-bytes, RAID configurations, etc.), and ample system memory (dozens of Gb of RAM). So, on that end, you don't have to worry about "support" (as in, "will it work") but more about "optimality" (what will work best).

As long as you do some research to figure out which distribution would be the best for this kind of application, and make sure the kernel version is a good balance between up-stream (state-of-the-art) and stability, you should be good. I would look into distros that are closer to state-of-the-art and not too far from a parent / related professional server distribution.. the one that comes to mind is Fedora, but you need to look into that more carefully. You might also want to look into what kernel modules are important to enable for these types of "monster" machines, as some of these modules might not be enabled by default in "run-of-the-mill" desktop distributions.

The main problem is going to be …

RikTelner commented: You blew answer out of water. Since now, thou ist considered Linux wikipedia. +2
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

A colleague of mine was using Kinect in Linux not so long ago. I believe he used ROS's modules for it (ROS: Robot Operating System, which is a comprehensive robotics library (mostly C++, with python bindings)). I think ROS uses the openni library for this. All I know is that it didn't take more than about 1 hour to have it running, so, it can't be that hard.

Also, I would expect that the OpenKinect project is also likely to work well.

with c# ?

That might be the sticky point. No one really cares about C#, and certainly not in Linux. I think the OpenKinect project has a C# wrapper, but it seems primitive or not very developed yet. Similarly, I don't think ROS supports any C# at all, or at least, it's very weak... even their Java support isn't great... there just aren't that many people who would do robotics with such inappropriate languages. So, you shouldn't hold out too much hope for a native C# solution to this. And in general, in the non-Windows world, when it comes to the programming language landscape, C# is pretty far down the list (in major part because Microsoft really doesn't want C# code to run anywhere else.. I mean, that's the only reason this language (and .NET) exists, don't you know?).

I can't imagine your company planning to use C# as the primary development language, especially if using Linux. So, you will have to get comfortable with a grown-up's language …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Let's just say we agree to a degree... ;)

Reverend Jim commented: I can go for that. +0
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I must concur with deceptikon on this... Be careful about proclaiming yourself an authority in a matter that you clearly only have cursory knowledge of. I have looked at your top three entries (the character testing thing, the switch-statement thing, and the recursive linear search (yuk..)). They are all poorly chosen examples, and contain some serious problems.

First of all, if you want to "teach", you also have to "teach by example". You cannot put up example code that disregards most rules of code-clarity. You have to be extra careful about indentation, spacing, and exposition of the code in general. In that regard, this example code is quite egregious:

#include<iostream>
using namespace std;
int main()
{
char grade; double gpa=0.0;
cout<<"Enter your Grade=  ";
cin>>grade;
switch(grade)
{
  case'A':
  case'a':
  gpa=4.0;
  cout<<"your GPA is "<<gpa;
  break;

    case'B':
    case'b':
    gpa=3.0;
    cout<<"your GPA is "<<gpa;
    break;

     case'C':
     case'c':
     gpa=2.0;
     cout<<"your GPA is "<<gpa;
     break;

      case'D':
      case'd':
      gpa=1.0;
      cout<<"your GPA is "<<gpa;
      break;

       case'F':
       case'f':
       gpa=0.0;
       cout<<"your GPA is "<<gpa;
       break;

    default:
    cout<<"invalid grade entered";
    break;
  }
return 0;
}

That just looks terrible, regardless of the context (professional or academic). Just the most minimal standards of clarity mandate that you should at least have something like this:

#include<iostream>

using namespace std;

int main()
{
  char grade; 

  cout << "Enter your Grade=  ";
  cin >> grade;

  double gpa=0.0;

  switch(grade)
  {
    case 'A':
    case 'a':
      gpa = 4.0;
      cout << "your GPA is " << gpa;
      break;
    case 'B':
    case 'b':
      gpa …
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I think the problem with this has to do with the definitiveness of the word "war". For example, if we say "No shit?!" (which is a contraction of "is what you're telling no shit?!"), the "no" is appropriate (as opposed to "not") because the word "shit" is indefinite (or abstract). In the opposite case, like if you say "by the lack of smell, I can say that the brown stain on the floor is not shit", you use "not" because of the concrete / definite use of the word. The ambiguity with the word "war" is that it can be both also. However, in the context of a verb like "make", the complement must be definite, i.e., you cannot "make" something abstract or indefinite. "Make love" is a concrete act, and so is "make war". And also, another way to see it is that "not war" is a contraction of "do not make war", where the "do make" is implied by the previous "make love", i.e., it is "do make love, do not make war" becoming "make love, not war". But in indefinite cases, you would use "no", such as saying "we don't want no war with you" (which is, in itself, an interesting structure).

ddanbe commented: Interesting! +0
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

One minor thing, I would probably move this test:

if (numerator.sign()!=denominator.sign())//result is negative
    return -((-numerator)/denominator);

to the start of the function to avoid a double testing for the 0 conditions. Currently, if you have a negative, non-zero fraction, you first test both num and den for being zero, then you see that the fraction is negative and make a recursive call in which you will, again, test both num and den for being zero. Simply moving the negativity test to the beginning will solve that minor inefficiency.

But, of course, the major thing is the following loop:

Int ret=0;
while (num>den)
{
    ret++;
    num-=den;
}

This is linear in "ret", meaning that you just repeatedly subtract den from num until you can't do it anymore. This seems terribly inefficient. I would recommend using a method that does it in O(log(N)) time instead of O(N). Here is a simple method to do it:

Int num = numerator.abs();
Int den = denominator.abs();
Int ret = 0;
Int mask = 1;
// Shift den (and a quotient-bit mask) up until den exceeds num:
while( num >= den ) {
    den <<= 1;
    mask <<= 1;
}
// Shift back down, setting the quotient bits along the way:
while( !(mask & 1) ) {
    den >>= 1;
    mask >>= 1;
    if( num >= den ) {   // '>=' so that exact multiples are counted too
        ret |= mask;
        num -= den;
    }
}

At least, this does the work in log(N), where N is the value of "ret" and log is base 2. If you didn't understand this, it is quite simple: you multiply "den" by 2 as many times as …

ddanbe commented: ONce again, showing deep knowledge +15
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Another open-source operating system you might want to look at if you are interested in this is the FreeRTOS project. This is a very small real-time operating system designed for embedded systems (micro-controllers, i.e., very tiny computers on a single chip). This might be an easier introduction to the practical aspects of OS development, because the source code is only a few thousand lines of code (as opposed to Linux, which has 15 million lines!).

But one thing is for sure, OS code is never pretty. It's a long and tedious sequence of bit-fiddling and book-keeping.

how is it Operating Systems are made?

Obviously, there is far too much here to explain it all, and the details are far beyond my own knowledge of the subject. But, essentially, operating systems are written like any other library or application, except that you have almost nothing to begin with. Without an operating system, you don't have file I/O, you don't have threads, you don't have peripherals of any kind, you don't have dynamic memory allocation, you don't have any protections (such as preventing access to the wrong memory), and so on... this means that every little task can become quite tedious, i.e., very "low-level". But, by definition, the code is kind of "simple", close to the metal.

Generally, the architecture of an operating system has many parts, one for each of the main "features". But the most important concept is that of kernel-space vs. user-space. An operating system …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

and then do a batch install of all the packages of your VM installation, and then, an rsync to retrieve all the home-folder contents.

I don't get this part. Could you extend?

Yeah, that was a bit too dense in Linux jargon. Let's say you use a Debian-based distro, then you would be using dpkg and apt-get to install software. On your VM, you can do this:

$ dpkg --get-selections > list.txt

which will produce a file "list.txt" that contains a list of all software packages installed on your system. Then, on the new system (fresh install), you can install all those packages by doing this:

$ sudo dpkg --clear-selections
$ sudo dpkg --set-selections < list.txt
$ sudo apt-get autoremove
$ sudo apt-get dselect-upgrade

So, that's the first part (the "batch-install of all packages of your VM").

The second part is quite simple. As said in the link I gave, you can convert your VM image into a raw disk image:

$ qemu-img convert your-vmware-disk.vmdk -O raw disk.img

and then, in the new system, you can mount the image to a folder:

$ sudo mkdir /media/VMimage
$ sudo mount -o loop /path/to/disk/image/disk.img /media/VMimage

And then, all you need to do is rsync the home folders:

$ rsync -vrtz /media/VMimage/home/username/ ~/

And that will make your home folder in your new system identical to the one on the virtual machine. If you need to sync any other folder, do so, …

RikTelner commented: Much better than all other rude, psycho Linux fans. Finally found someone who can explain things as normal and as plain as possible to someone of my knownledge of Linux (== null). +2
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

You can certainly do it. There are a few ways, depending on what customizations you made and what the destination is (dual-boot, etc.). But it is certainly possible. For example, see the instructions given here.

If your customizations are limited to the list of installed packages and the contents of your "home" directory, then you can just make a new installation on your real HDD, and then do a batch install of all the packages of your VM installation, and then, an rsync to retrieve all the home-folder contents. That's the way I would typically do a backup and migrate thing for the usual scenario of "I want to migrate all my installed software and files".

But for a complete migration, just use the image-dumping techniques described in the link above.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Personally, I have an HD WD TV Live, which is kind of like Roku, but cheaper. It works perfectly for this kind of application. The box connects wirelessly to the router, and can see either shared folders (samba) or a UPnP media server (which I recommend, and which is easy to set up on your computer). It allows you to view all your media on the TV directly, and with the media server thing, you don't have to worry about decoding video and stuff, because it always works (i.e., if the box itself does not have the required codecs, it will get the computer (server) to decode the video for it). I'm 100% happy with that product, and it's less than 100 bucks to buy, with no subscriptions. It also has internet capabilities (through your network), like for youtube and netflix, if you ever want to use that.