mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I can only learn from it!

Alright, you asked for it. Here's a revised version of your post (at least, how I would revise it):

"My first language is Dutch, and I don't like it when Dutch people make mistakes in Dutch on TV or in print. I have been here, on DaniWeb, for quite a while now and never encountered anyone who mocks my English. In fact, quite à propos to this discussion, I would like a member to correct me on grammatical or spelling mistakes in English."

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Linux can handle all of this very well (much better than Windows, in most cases). The real question is more about how optimal it will be. The first thing you have to understand is that one of the most important application domains (if not the most important) of Linux is servers and mainframes, which are just really powerful computers. So, Linux is, in general, far ahead of the game when it comes to handling and efficiently exploiting hardware with powerful CPUs (lots of cache, lots of cores), large hard drives (several terabytes, RAID configurations, etc.), and ample system memory (dozens of GB of RAM). So, on that end, you don't have to worry about "support" (as in, "will it work") but more about "optimality" (what will work best).

As long as you do some research to figure out which distribution would be best for this kind of application, and make sure the kernel version is a good balance between upstream (state-of-the-art) and stability, you should be good. I would look into distros that are closer to state-of-the-art and not too far from a parent / related professional server distribution... the one that comes to mind is Fedora, but you need to look into that more carefully. You might also want to look into which kernel modules are important to enable for these types of "monster" machines, as some of these modules might not be enabled by default in "run-of-the-mill" desktop distributions.

The main problem is going to be …

RikTelner commented: You blew answer out of water. Since now, thou ist considered Linux wikipedia. +2
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

A colleague of mine was using Kinect in Linux not so long ago. I believe he used ROS's modules for it (ROS: Robot Operating System, which is a comprehensive robotics library (mostly C++, with Python bindings)). I think ROS uses the OpenNI library for this. All I know is that it didn't take him more than about an hour to have it running, so it can't be that hard.

Also, I would expect the OpenKinect project to work well too.

with c# ?

That might be the sticky point. No one really cares about C#, and certainly not in Linux. I think the OpenKinect project has a C# wrapper, but it seems primitive or not very developed yet. Similarly, I don't think ROS supports any C# at all, or at least, the support is very weak... even their Java support isn't great... there just aren't that many people who would do robotics with such inappropriate languages. So, you shouldn't hold out too much hope for a native C# solution to this. And in general, in the non-Windows world, when it comes to the programming language landscape, C# is pretty far down the list (in large part because Microsoft really doesn't want C# code to run anywhere else... I mean, that's the only reason this language (and .NET) exists, don't you know?).

I can't imagine your company planning to use C# as the primary development language, especially if using Linux. So, you will have to get comfortable with a grown-up's language …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Let's just say we agree to a degree... ;)

Reverend Jim commented: I can go for that. +0
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

You got trapped, RJ. When you said "either this is or is not shit", you introduced definiteness into the sentence by using the definite pronoun "this", and because "shit" qualifies the thing that is referred to by "this", it is now a definite use of "shit". That's why "not" is appropriate here. But that does not carry over to the "no shit?" expression. What indefinite means is that "no" is appropriate wherever you could use any of the following: "not a", "not any", "not some", "not one", ... and so on... you get the point. I know that this is hard to understand; it's another remnant of a time when English had a more complex grammar and when such distinctions were far more obvious.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Well... that's the double entendre, i.e., the sub-text. The text itself means "have sex instead of killing people", but the use of the word "war", which is usually indefinite (the "abstract" concept of war) and global in scope, gives a sub-text to the whole expression. The globality of the term "war" imbues the "make love" with the same global scope, leading to the sub-text that says "let's all love each other, instead of making war with each other". That also creates an association between the very obvious truth that having sex is preferable to killing people and the proposition that world peace should be just as obviously desirable. That's the poetry (or imagery) that is conveyed by the phrase and why it sticks in people's minds. But that's beyond the grammar of it.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I must concur with deceptikon on this... Be careful about proclaiming yourself an authority on a matter of which you clearly have only cursory knowledge. I have looked at your top three entries (the character testing thing, the switch-statement thing, and the recursive linear search (yuk..)). They are all poorly chosen examples, and they contain some serious problems.

First of all, if you want to "teach", you also have to "teach by example". You cannot put up example code that disregards most rules of code clarity. You have to be extra careful about indentation, spacing, and the exposition of the code in general. In that regard, this example code is quite egregious:

#include<iostream>
using namespace std;
int main()
{
char grade; double gpa=0.0;
cout<<"Enter your Grade=  ";
cin>>grade;
switch(grade)
{
  case'A':
  case'a':
  gpa=4.0;
  cout<<"your GPA is "<<gpa;
  break;

    case'B':
    case'b':
    gpa=3.0;
    cout<<"your GPA is "<<gpa;
    break;

     case'C':
     case'c':
     gpa=2.0;
     cout<<"your GPA is "<<gpa;
     break;

      case'D':
      case'd':
      gpa=1.0;
      cout<<"your GPA is "<<gpa;
      break;

       case'F':
       case'f':
       gpa=0.0;
       cout<<"your GPA is "<<gpa;
       break;

    default:
    cout<<"invalid grade entered";
    break;
  }
return 0;
}

That just looks terrible, regardless of the context (professional or academic). Even the most minimal standards of clarity mandate that you should at least have something like this:

#include <iostream>

using namespace std;

int main()
{
  char grade;

  cout << "Enter your Grade=  ";
  cin >> grade;

  double gpa = 0.0;

  switch(grade)
  {
    case 'A':
    case 'a':
      gpa = 4.0;
      cout << "your GPA is " << gpa;
      break;
    case 'B':
    case 'b':
      gpa = 3.0;
      cout << "your GPA is " << gpa;
      break;
    case 'C':
    case 'c':
      gpa = 2.0;
      cout << "your GPA is " << gpa;
      break;
    case 'D':
    case 'd':
      gpa = 1.0;
      cout << "your GPA is " << gpa;
      break;
    case 'F':
    case 'f':
      gpa = 0.0;
      cout << "your GPA is " << gpa;
      break;
    default:
      cout << "invalid grade entered";
      break;
  }

  return 0;
}
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I think the problem with this has to do with the definiteness of the word "war". For example, if we say "No shit?!" (which is a contraction of "is what you're telling me no shit?!"), the "no" is appropriate (as opposed to "not") because the word "shit" is indefinite (or abstract). In the opposite case, like if you say "by the lack of smell, I can say that the brown stain on the floor is not shit", you use "not" because of the concrete / definite use of the word. The ambiguity with the word "war" is that it can be both. However, in the context of a verb like "make", the complement must be definite, i.e., you cannot "make" something abstract or indefinite. "Make love" is a concrete act, and so is "make war". Another way to see it is that "not war" is a contraction of "do not make war", where the "do make" is implied by the previous "make love", i.e., it is "do make love, do not make war" becoming "make love, not war". But in indefinite cases, you would use "no", such as saying "we don't want no war with you" (which is, in itself, an interesting structure).

ddanbe commented: Interesting! +0
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Another little note on the first minor issue, the ordering of the conditions. In general, the rule when you have to do a number of checks like this at the start of a function is that you should always start with the checks that are most likely to trigger (i.e., the more common occurrences), and end with the least likely. This way, you avoid unnecessary checks in most cases. Your example is a clear case of that: the likelihood that a fraction is negative is about 50%, which is much higher than the likelihood of a divide-by-zero or a zero numerator.

Also note that the check for the numerator being 0 is not strictly necessary, and might even be detrimental to performance, since, in general, a conditional check (branch) is more expensive than a trivial degeneration of the algorithm that produces the same answer. However, in this case, I think it is OK to have it because, without it, you end up doing two conditional checks (the while loop conditions) before getting at the value of 0.

Also, I just noticed that the while loop conditions should use greater-than-or-equal tests:

Int num = numerator.abs();
Int den = denominator.abs();
Int ret = 0;
Int mask = 1;
while( num >= den ) {
    den <<= 1;
    mask <<= 1;
}
while( !(mask & 1) )
{
    den >>= 1;
    mask >>= 1;
    if( num >= den ) {
        ret |= mask;
        num -= den;
    }
}
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

One minor thing, I would probably move this test:

if (numerator.sign()!=denominator.sign())//result is negative
    return -((-numerator)/denominator);

to the start of the function, to avoid double testing for the 0 conditions. Currently, if you have a negative, non-zero fraction, you first test both num and den for being zero, then you see that the fraction is negative and make a recursive call, in which you will, again, test both num and den for being zero. Simply moving the negativity test to the beginning solves that minor inefficiency.

But, of course, the major thing is the following loop:

Int ret=0;
while (num>den)
{
    ret++;
    num-=den;
}

This is linear in "ret", meaning that you just repeatedly subtract den from num until you can't do it anymore. This seems terribly inefficient. I would recommend using a method that does it in O(log(N)) time instead of O(N). Here is a simple method to do it:

Int num = numerator.abs();
Int den = denominator.abs();
Int ret = 0;
Int mask = 1;
while( num > den ) {
    den <<= 1;
    mask <<= 1;
}
while( !(mask & 1) )
{
    den >>= 1;
    mask >>= 1;
    if( num > den ) {
        ret |= mask;
        num -= den;
    }
}

At least, this does the work in log(N), where N is the value of "ret" and log is base 2. If you didn't understand this, it is quite simple: you multiply "den" by 2 as many times as …

ddanbe commented: ONce again, showing deep knowledge +15
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Another open-source operating system you might want to look at, if you are interested in this, is the FreeRTOS project. This is a very small real-time operating system designed for embedded systems (micro-controllers, i.e., very tiny computers on a single chip). This might be an easier introduction to the practical aspects of OS development, because the source code is only a few thousand lines of code (as opposed to Linux, which has 15 million lines!).

But one thing is for sure: OS code is never pretty. It's a long and tedious sequence of bit-fiddling and bookkeeping.

how is it Operating Systems are made?

Obviously, there is far too much here to explain it all, and the details are far beyond my own knowledge of the subject. But, essentially, operating systems are written like any other library or application, except that you have almost nothing to begin with. Without an operating system, you don't have file I/O, you don't have threads, you don't have peripherals of any kind, you don't have dynamic memory allocation, you don't have any protections (such as preventing access to the wrong memory), and so on... this means that every little task can become quite tedious, i.e., very "low-level". But, by definition, the code is kind of "simple", close to the metal.

Generally, the architecture of an operating system has many parts, one for each of the main "features". But the most important concept is that of kernel-space vs. user-space. An operating system …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

and then do a batch install of all the packages of your VM installation, and then, an rsync to retrieve all the home-folder contents.

I don't get this part. Could you extend?

Yeah, that was a bit too dense in Linux jargon. Let's say you use a Debian-based distro; then you would be using dpkg and apt-get to install software. On your VM, you can do this:

$ dpkg --get-selections > list.txt

which will produce a file "list.txt" that contains a list of all software packages installed on your system. Then, on the new system (fresh install), you can install all those packages by doing this:

$ sudo dpkg --clear-selections
$ sudo dpkg --set-selections < list.txt
$ sudo apt-get autoremove
$ sudo apt-get dselect-upgrade

So, that's the first part (the "batch-install of all packages of your VM").

The second part is quite simple. As said in the link I gave, you can convert your VM image into a raw disk image:

$ qemu-img convert your-vmware-disk.vmdk -O raw disk.img

and then, in the new system, you can mount the image to a folder:

$ sudo mkdir /media/VMimage
$ sudo mount -o loop /path/to/disk/image/disk.img /media/VMimage

And then, all you need to do is rsync the home folders:

$ rsync -vrtz /media/VMimage/home/username/ ~/

And that will make your home folder in your new system identical to the one on the virtual machine. If you need to sync any other folder, do so, …

RikTelner commented: Much better than all other rude, psycho Linux fans. Finally found someone who can explain things as normal and as plain as possible to someone of my knownledge of Linux (== null). +2
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

You can certainly do it. There are a few ways, depending on what customizations you made and what the destination is (dual-boot, etc.). For example, see the instructions given here.

If your customizations are limited to the list of installed packages and the contents of your "home" directory, then you can just make a new installation on your real HDD, and then do a batch install of all the packages of your VM installation, and then, an rsync to retrieve all the home-folder contents. That's the way I would typically do a backup and migrate thing for the usual scenario of "I want to migrate all my installed software and files".

But for a complete migration, just use the image-dumping techniques described in the link above.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Any idea what the difference is?

From a quick search, it seems that this sums it up pretty well:

"The big difference is that the Play is really centered around internet apps / services and you can only stream from attached storage or from a media server. It also cannot play meg2 video.
The Live can play most files from attached storage and network drives via media server or network shares. It can also play DVD iso image files with menus."

I would recommend the Live version, as I think it will be more feature-complete. It seems that the Play just has a number of arbitrary limitations that might get annoying (more of those "why the heck doesn't this work!" moments).

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I also want to mention that you can plug in a (wireless) keyboard with the USB plug. That will be much nicer for navigating the menu, finding your media, and when connecting to the internet. Doing everything with the remote is a bit of a pain. Also, you can control it with your smartphone, but I haven't tried that.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

We don't do people's homework here. You have to show that you are actively trying to solve the problem on your own, and ask specific questions about your code and how to fix it. Please show what you have done.

I would advise that you start by trying to solve the problem without templates... i.e., just do it for "int", and then make it a template afterwards, once you know it works.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

It's about 100 bucks; you can get one at your local tech store, like Future Shop or Best Buy.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I always find it funny how interviews, which presumably are aimed at getting to know that you are not just a pedestrian CS student and that you have some real experience and insights, end up asking a bunch of pedestrian CS questions.

1) What's the difference between an abstract class and an interface? Which do you use more?

I don't care much about this; I'm primarily a C++ guy, and we don't make such a forced distinction (which exists only because of a language limitation in Java). I generally favor simple inheritance hierarchies and more reliance on composition than on inheritance, and for those reasons, I use abstract base classes more than interfaces, because they make more sense in that kind of structure. The use of interfaces encourages the creation of monolithic classes with too much disparate functionality and too many purposes. I prefer the composition of nuclear objects of classes with simple-and-flat inheritance and a single, well-defined purpose.
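To illustrate, here is a hypothetical sketch (the Sensor / Thermometer / Robot names are invented for illustration; this is not from any real code base) of what I mean by composing nuclear objects with simple-and-flat inheritance:

#include <iostream>
#include <memory>
#include <utility>

// A small abstract base class with a single, well-defined purpose:
class Sensor {
public:
    virtual ~Sensor() { }
    virtual double read() const = 0;  // pure virtual; this makes the class abstract
};

// Simple, flat inheritance: one level, one responsibility:
class Thermometer : public Sensor {
public:
    double read() const { return 21.5; }  // stub reading, just for the sketch
};

// Composition: the robot *has* a sensor; it does not inherit a pile of interfaces:
class Robot {
public:
    explicit Robot(std::unique_ptr<Sensor> s) : sensor(std::move(s)) { }
    void report() const { std::cout << "sensor reads: " << sensor->read() << std::endl; }
private:
    std::unique_ptr<Sensor> sensor;
};

int main() {
    Robot r(std::unique_ptr<Sensor>(new Thermometer()));
    r.report();
    return 0;
}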

2) How does garbage collection work in Java?

That is none of my concern as a programmer (as jwenting says). All I need to know is that I create stuff, share it at will, eventually discard it, and that at some indeterminate time in the future that discarded memory will be freed (and recycled). If I have to guess, the JVM probably implements this with some sort of reference counting or some other semaphore-like mechanism, and regularly cleans up any memory that is no longer referenced anywhere (and thus, no …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Personally, I have a WD TV Live, which is kind of like Roku, but cheaper. It works perfectly for this kind of application. The box connects wirelessly to the router, and can see either shared folders (Samba) or a UPnP media server (which I recommend, and which is easy to set up on your computer). It allows you to view all your media on the TV directly, and with the media server thing, you don't have to worry about decoding video and stuff, because it always works (i.e., if the box itself does not have the required codecs, it will get the computer (server) to decode the video for it). I'm 100% happy with that product, and it's less than 100 bucks to buy, with no subscriptions. It also has internet capabilities (through your network), like for YouTube and Netflix, if you ever want to use that.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Yes, that would seem like a bug, or at least, very bad static analysis on the part of MSVC. I use mostly GCC or Clang, and I have never seen warnings for such code (i.e., if-else with no return afterwards, which is quite common). I'm quite surprised that MSVC (from VS2010) so incompetently produces such a bogus warning (and that's coming from someone who already has a very low opinion of MSVC compilers).

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Hi, and welcome to Daniweb!

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Technically, you could just overload the operators for the classes std::ofstream and std::ifstream. Whenever the stream object is of a class derived from the file-stream classes, it should pick that overload instead of the non-file-stream overloads, just because the file-stream classes are more derived (specialized) than the non-file-stream classes. So, you could just do this:

friend std::ostream& operator<<(std::ostream& os, const CBase& obj)
{
    os << obj.iMyInt << " " <<
          obj.fMyFloat <<  " " <<
          obj.dMyDouble <<  " " <<
          obj.cMyChar;
    return os;
}

friend std::ofstream& operator<<(std::ofstream& os, const CBase& obj)
{
    os.write((const char *)&obj.iMyInt, sizeof(obj.iMyInt));
    os.write((const char *)&obj.fMyFloat, sizeof(obj.fMyFloat));
    os.write((const char *)&obj.dMyDouble, sizeof(obj.dMyDouble));
    os.write((const char *)&obj.cMyChar, sizeof(obj.cMyChar));
    return os;
}

friend std::istream& operator>>(std::istream& is, CBase& obj)
{
    is >> obj.iMyInt >> 
          obj.fMyFloat >> 
          obj.dMyDouble >> 
          obj.cMyChar;
    return is;
}

friend std::ifstream& operator>>(std::ifstream& is, CBase& obj)
{
    is.read((char *)&obj.iMyInt, sizeof(obj.iMyInt));
    is.read((char *)&obj.fMyFloat, sizeof(obj.fMyFloat));
    is.read((char *)&obj.dMyDouble, sizeof(obj.dMyDouble));
    is.read((char *)&obj.cMyChar, sizeof(obj.cMyChar));
    return is;
}

The problem with this scheme, however, is that it won't work if, at some point, the file-stream objects get cast to non-file-stream class references. This could be problematic if you compose more complicated use-scenarios.

To solve that problem, you could use a dynamic_cast within the operators:

friend std::ostream& operator<<(std::ostream& os, const CBase& obj)
{
    std::ofstream* p_ofs = dynamic_cast<std::ofstream*>(&os);
    if( p_ofs == NULL ) {
        os << obj.iMyInt << " " <<
              obj.fMyFloat <<  " " <<
              obj.dMyDouble <<  " " <<
              obj.cMyChar;
    } else {
        p_ofs->write((const char *)&obj.iMyInt, sizeof(obj.iMyInt));
        p_ofs->write((const char *)&obj.fMyFloat, sizeof(obj.fMyFloat));
        p_ofs->write((const …
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

precompiled headers are NOT microsoftism they are part of C++

That's half-true. Precompiled headers are certainly not part of standard C++, in the sense of what the ISO standard document describes (which is admittedly also the case for most practical compilation mechanisms). However, they are a common feature that many C++ compilers provide, and in that sense, the feature itself is not a Microsoftism. But then again, the way that MSVC implements precompiled headers (using the stdafx.h header) is unique to MSVC and is certainly not portable (and is quite impractical and annoying). Other compilers implement this feature differently, and in a much less intrusive manner (just a command-line option to switch it on for all headers). And some compilers don't implement it at all.

If you want to use pre-compiled headers in a cross-platform, portable library or application, then you will have to rely on a cross-platform build system, such as cmake, to handle the specific methods that different compilers use.

And yes, unless you have a large project, there is no point in using pre-compiled headers, but when you do have a large project, they can be very beneficial to speed up compilation.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I found out the way C++ passes arrays around (via a pointer) makes finding the end of the array impossible!

That's true, but you can keep track of how large the array is. That's the usual method.
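For instance, here is a minimal sketch of that usual method (pass the size along with the pointer):

#include <cstddef>
#include <iostream>

// The array decays to a pointer, so the size must travel with it:
double sum(const double* arr, std::size_t n)
{
    double total = 0.0;
    for (std::size_t i = 0; i < n; ++i)
        total += arr[i];
    return total;
}

int main()
{
    double values[] = {1.0, 2.5, 3.5};
    std::cout << sum(values, sizeof(values) / sizeof(values[0])) << std::endl;  // prints 7
    return 0;
}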

So my question is how do developers make these string classes?

Typically, an implementation of the standard string class would have three data members:

class string {
  private:
    char* ptr;
    std::size_t size;
    std::size_t capacity;
  public:
    // ...
};

The pointer points to the start of the array of characters. The size keeps track of how many valid characters there are in the string. And the capacity keeps track of how many characters the array can hold (i.e., the size of the array). Whenever you make an addition to the string and the capacity is insufficient to accommodate the new string, it just allocates another array that is big enough (with some added margin) to hold the new string, copies all the characters to that new location, and deletes the old array. Pretty much everything else is straightforward from there.
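For example, here is a rough sketch (not actual standard-library code, and ignoring details like exception safety) of how an append operation could be implemented with those three data members:

#include <cstring>  // for std::memcpy

void string::append(const char* s, std::size_t n)
{
    if (size + n > capacity) {
        // Allocate a bigger array with some added margin (doubling is a common choice):
        std::size_t new_capacity = 2 * (size + n);
        char* new_ptr = new char[new_capacity];
        std::memcpy(new_ptr, ptr, size);  // copy the old characters over
        delete[] ptr;                     // delete the old array
        ptr = new_ptr;
        capacity = new_capacity;
    }
    std::memcpy(ptr + size, s, n);  // append the new characters at the end
    size += n;
}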

Typically, a real string class implementation, such as the one that comes with your compiler, will be much more complicated than that, because of performance optimizations and things like that. But the point is, you can implement a string class quite easily with the above data members.

Do they use C++ or some other language, possibly assembly and plug it into C++ by using includes?

For the …

rubberman commented: Good answer Mike. +12
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

"partition signature != 55AA"

This error has nothing to do with FreeDOS or any other OS. This is an error message issued by the BIOS to tell you that your partition table is corrupt. This means that you either have an unpartitioned disk or you have corrupted the master boot record of your hard-disk. In other words, you seriously messed up the installation of FreeDOS or whatever else you did to your hard-drive (or USB drive). Did you forget to make your USB drive bootable? USB drives are not bootable on their own; they need a master boot record (MBR) to become bootable.
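The check itself is trivial: the BIOS just looks at the last two bytes of the first 512-byte sector (offset 510) and expects the values 0x55 0xAA there. For illustration, here is a little sketch (a hypothetical "checkmbr" utility, not a real tool) that performs the same check on a disk image:

#include <fstream>
#include <iostream>

int main(int argc, char** argv)
{
    if (argc < 2) {
        std::cerr << "usage: checkmbr <disk-image>" << std::endl;
        return 1;
    }
    std::ifstream disk(argv[1], std::ios::binary);
    if (!disk) {
        std::cerr << "could not open " << argv[1] << std::endl;
        return 1;
    }
    unsigned char sig[2] = {0, 0};
    disk.seekg(510);  // the boot signature lives at offset 510 of the first sector
    disk.read(reinterpret_cast<char*>(sig), 2);
    if (sig[0] == 0x55 && sig[1] == 0xAA)
        std::cout << "valid boot signature (55AA)" << std::endl;
    else
        std::cout << "partition signature != 55AA" << std::endl;  // the BIOS's complaint
    return 0;
}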

It does not support far more.

It depends on what you mean by that. "Computer" is a general term. If by that you only mean desktop PCs and laptop PCs, then, no. PCs are sold with the intent of running Windows, and all the hardware manufacturers need to provide Windows drivers for their hardware for it to be useful at all. That's the only reason why Windows works on all PCs: all PCs are made to work with Windows. If you are talking about computer systems in general (PCs, micro-PCs, micro-controllers, embedded systems, servers, mainframes, industrial controllers, etc.), then Linux supports, or can be made to run on, virtually every piece of hardware that exists, while Windows is limited to PCs that use one of a few Intel instruction sets. That's what I meant.

If you want to be building an OS from …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Here is the corrected code:

StrBlob.h:

#ifndef EXERCISE_STR_BLOB_H_
#define EXERCISE_STR_BLOB_H_

#include <vector>
#include <string>
#include <memory>
#include <initializer_list>

class StrBlobPtr;

class StrBlob {
  friend class StrBlobPtr;
public:
  typedef std::vector<std::string>::size_type size_type;
  StrBlob();
  StrBlob(std::initializer_list<std::string> il);
  size_type size() const { return data->size(); }
  bool empty() const { return data->empty(); }

  // add and remove elements
  void push_back(const std::string &t)
    { data->push_back(t); }
  void pop_back();

  // element access
  std::string& front();
  std::string& back();


  StrBlobPtr begin();
  StrBlobPtr end();

private:
  std::shared_ptr<std::vector<std::string> > data;
  // throw msg if data[i] isn't valid
  void check(std::size_t i, const std::string &msg) const;
};

#endif

StrBlobPtr.h:

#ifndef EXERCISE_STR_BLOB_PTR_H_
#define EXERCISE_STR_BLOB_PTR_H_

#include <vector>
#include <string>
#include <memory>

class StrBlob; // forward declaration

class StrBlobPtr {
public:
  StrBlobPtr(): curr(0) { }
  StrBlobPtr(StrBlob &a, std::size_t sz = 0);
  std::string& deref() const;
  StrBlobPtr& incr();  // prefix version

private:
  std::shared_ptr<std::vector<std::string> >
    check(std::size_t, const std::string&) const;
  std::weak_ptr<std::vector<std::string> > wptr;
  std::size_t curr;
};

#endif

StrBlob.cpp:

#include "StrBlob.h"
#include "StrBlobPtr.h"

StrBlob::StrBlob() : data(std::make_shared<std::vector<std::string> >()) { }

StrBlob::StrBlob(std::initializer_list<std::string> il) :
  data(std::make_shared<std::vector<std::string> >(il)) { }

void StrBlob::check(std::size_t i, const std::string &msg) const
{
  if (i >= data->size())
    throw std::out_of_range(msg);
}

std::string& StrBlob::front()
{
  // if the vector is empty, check will throw
  check(0, "front on empty StrBlob");
  return data->front();
}
std::string& StrBlob::back()
{
  check(0, "back on empty StrBlob");
  return data->back();
}
void StrBlob::pop_back()
{
  check(0, "pop_back on empty StrBlob");
  data->pop_back();
}

StrBlobPtr StrBlob::begin() { 
  return StrBlobPtr(*this); 
}

StrBlobPtr StrBlob::end() { 
  auto ret = StrBlobPtr(*this, data->size());
  return ret; 
}

StrBlobPtr.cpp:

#include "StrBlobPtr.h"
#include "StrBlob.h"

StrBlobPtr::StrBlobPtr(StrBlob &a, std::size_t sz) : …
Vasthor commented: great tips, and cleared up the things that I thought about the book, it seems the write doesn't even compile the code s/he wrote. +2
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Writing a file format that is forward and backward compatible is very easy, and there are many formats that are structured like that. I've dealt with and created many file formats like that, and from a programming perspective, creating such formats is actually easier than creating any other kind of format.

If Microsoft had used a tagged format then they could have maintained backward compatibility.

There is absolutely no doubt in my mind that incompatible formats are always made deliberately. There is a clear economic advantage in creating incompatible formats with each new version of the software; it's part of planned obsolescence.

The worst offenders in my field are CAD software, especially Pro/Engineer and SolidWorks. Every version, and every different license (student, individual, company), comes with a mutually incompatible file format, and really terrible options for exporting to other formats. Basically, once a company commits to a particular CAD software, they have to use it forever, and they have to update everyone's software every year or so. This is a huge and well-known racket.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Does LFS support all computers?

Pretty much, yes. Far more computers than MS DOS or Windows supports. Windows pretty much just supports x86 / x86-64 architectures. Linux systems support (or can be made to support) pretty much every exotic piece of hardware you can think of. I would imagine that MS DOS (or a clone thereof) supports even fewer platforms.

With video card problems

If what you are aiming for is a pure terminal / command-line operating system, then video card support won't matter. Such an operating system can run off the display functions of the BIOS directly, which means that it does not need graphics card drivers at all. That said, graphics drivers for Linux are now pretty good (with most companies providing proprietary Linux drivers), so that problem is mostly "behind us" at this point; there could still be some problems, but nothing that will prevent things from working. And if you don't need a GUI, then this is a non-issue.

The only other thing you could have trouble with is wireless cards. These are among the remaining problematic pieces of hardware. I would say that probably 95% of wireless cards will work out-of-the-box with Linux, but some are still a problem. And that is something you might want to have, even in a command-line operating system.

harddisk problems

There are no hard-disk problems with Linux. Most of the file-system drivers and kernel modules that Linux uses come from the development of …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

The difference between a=b and a=(A)b (or a=A(b)) is that the former says "assign b to a", while the latter says "create an A object from b, and assign it to a". The point is that because the assignment operators of A and its constructors both have the same acceptable signatures, const A& or int, the ambiguity is the same whether you have "assign b to a" or "create an A from b".

Is there a way to explicitly call a typecast operator?

Yes and no. Whether you use (A)b, A(b) or static_cast<A>(b), the situation is the same, "create A with b", and as such, it will be ambiguous. However, operators can be called explicitly, i.e., the operator functions are fundamentally just functions with a special name that allows the compiler to apply the operator syntax (e.g., operator + allows the compiler to apply it to a + b). But, as functions, they remain callable explicitly with the function syntax, so you can do this:

a = b.operator A();

to explicitly call the operator A member function of the B class. This trick applies to all other operators as well, and it can be useful for removing an ambiguity. This explicit call tells the compiler that it should first call that function, which returns an A object, and then assign it to a, which is unambiguous.
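Here is a minimal self-contained sketch that reproduces the ambiguity and then resolves it with the explicit call (the exact members of A are my assumption, based on the discussion above):

class A {
    int val;
public:
    A(int i = 0) : val(i) { }                        // constructor from int (also default)
    A& operator=(int i) { val = i; return *this; }   // assignment from int
};

class B {
    int val;
public:
    B(int i) : val(i) { }
    operator A() { return A(val); }    // user conversion B -> A
    operator double() { return 0.0; }  // user conversion B -> double
};

int main() {
    B b(42);
    A a;
    // a = b;              // ERROR: ambiguous between operator=(const A&) (via B -> A)
    //                     //        and operator=(int) (via B -> double -> int)
    a = b.operator A();    // OK: the conversion operator is called explicitly first
    return 0;
}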

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Clearly, the paths that are ambiguous are: B -> A -> const A& -> A; and, B -> double -> int -> A. I know this because removing the double conversion operator solves the problem (in both cases). Or, you can also make the operator explicit (C++11):

class B
{
    int val;
public:
    B(int i) { val = i; }
    operator A() { return A(val); }
    explicit operator double() { return 0.0; }
};

Now, a reasonable person might expect the B -> A -> const A& -> A path to be preferred over the double-int path, because binding an A object to a const-reference const A& seems like a much more straightforward conversion than the double to int conversion. However, these primitive types are considered arithmetic types, and as such, their conversions are given equal status to other implicit conversions. This is just one of those annoying realities (due to the C legacy of C++) that you have to watch out for. It is generally dangerous to play with implicit conversions because of these kinds of pitfalls. I tend to stay away from implicit conversions in general. At least, be conservative and mark most of your conversion operators / constructors as "explicit", and selectively make implicit conversions. That's my advice.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Update: It just happened again, and this time I looked more closely at the browser's status, and lo and behold, it's "Waiting for www.daniweb.com.." that hangs the longest, by far (like 10-20 seconds).

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Why on Earth would you want to recreate DOS? DOS was terrible when it came out, and now, it belongs in a museum. Like the Neanderthals, it's a relic of the past, a misfit that was doomed to extinction.

If you want a real powerful command-line operating system, you need to look at Unix-like systems, such as various distributions of Linux or BSD. The set of tools and commands available in a "bash" or similar shell environment on Unix-like systems dwarfs the pathetic features of the DOS environment.

Why DOS? It is fast, plain, looks professional and is 100% editable.

DOS is none of those things. Well... "plain", maybe. DOS is slow, feature-deprived, unprofessional, substandard, and closed-source.

So, how to create this? Just like it looks in DOSBOX.

Well, DOSBox is an open-source project, so you can just look at that code; that's gonna be a pretty good start. And how it looks (and it just looks like any other command-line OS) is not really important. In an operating system, what really matters is the kernel, the API, and the shell environment (e.g., bash). That's the stuff that is really hard to re-create. As far as I know, DOSBox just uses the underlying OS for its kernel functions, and then replicates the MS DOS API and command-line environment (CLI) that faces the user. MS DOS's API and CLI are among the simplest of all OSes (i.e., also the most pathetic in terms of capability and features), …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I get the same lagging issues. I also think it is a server issue because if I click on a link to a forum and it lags, it will show the "Waiting for.." at the bottom corner of my browser (Chrome). I assume that this means it's the server that is slow to respond. The next time it happens, I will try to see which domain is lagging (www.daniweb.com, or one of the social media domains). But it doesn't look like a javascript problem because the display of the pages is fast, it's just that there is often an unusually long delay (several seconds) before it initiates the switch. At least, that's the behavior I've experienced so far.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I second rubberman on his suggestion. I use ffmpeg whenever I need to do any kind of video / transcoding work; it's phenomenal. But I would just like to add that, technically, ffmpeg is deprecated in favor of "avconv" from the Libav project. avconv is technically an almost drop-in replacement for ffmpeg, it is also a command-line tool (although I think there are some basic GUIs for it), and it is more up-to-date. Essentially, the Libav project is a fork of ffmpeg (i.e., avconv is derived from ffmpeg) and has now surpassed ffmpeg, thus leading the ffmpeg people to pretty much just put a notice on their program saying that it's deprecated and that people should use avconv instead. ffmpeg is still a good tool, of course, but it's getting out of date.

But more importantly, avconv is available on Windows directly. In fact, Libav is what VLC uses for all its video work; so, if you know VLC, you know how superior it is to anything else out there, and that's because Libav is superior to anything else out there.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

"I seen it with my own two eyes."

Well, one has to be precise. It is reassuring to know that his eyes are his own, that he only has two, and that he sees with his eyes and not some supernatural ability. It's important in this day and age to make it clear that you are neither a Frankensteinian monster nor a mutant. I'd say these people are ahead of their time ;)

I get called a pedant and a tw*t (pick your own vowel) when I return corrected reports. Soul destroying. :(

That's a good point. To that I would add that this whole grammar-nazi thing is also starting to get on my nerves. I think that good grammar and writing well should be conveyed more in the form of an art than in the form of reprimands. I think that's the difference between those who appreciate good language and those who don't. Those who don't care will only see grammar as an annoying set of rules they constantly have to obey, or else get reprimanded. Those who do care see grammar as a useful set of rules for making what they write pleasant to read, clear, and fluent. That's really the only reason why I like grammar and linguistics: I appreciate the beauty of well-written text, and I'm proud of myself when I'm able to hit just the right notes when writing. Grammar is not sufficient for writing well, …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

The difference between while and awhile is explained well here. As for "for a while" vs. "for some time", there is not much of a difference; they are pretty much synonymous, and equally vague. In terms of formal vs. familiar, I think that both are OK in any context, but I would tend to prefer "some time" (or avoid "while" to mean "time") in a formal writing context. I don't think it's a big deal, though.

ddanbe commented: Thanks for the info +0
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

This is my graphics card driver: VESA: Intel® Sandybridge/Ivybridge Graphics

That's great. Intel graphics cards are the best for Linux, because Intel actively develops its own open-source drivers for its cards, and because they are open-source, they are included in Ubuntu by default. This means that not only are they up-to-date, but they are there by default and are backed by solid and active development. With Nvidia and ATI, the situation is quite a bit worse (but they are starting to catch up).

So, I would pretty much discount that as being the source of the problem, given that it's Intel and that it's very new.

Just try what has been suggested already.

Also, not all projectors are created equal. Some projectors are really annoying and simply don't work with all computers (even Windows / Mac computers).

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Here are my 2 cents (that is, 2 Canadian cents, and because pennies have been put out of circulation and prices are rounded to the nearest 5 cents, I guess my opinion is really worth nothing ;) ).

I haven't used MS Office in years, but not really in favor of LibreOffice (or OpenOffice) either. For writing documents, as the saying goes, once you start using LaTeX, you never look back. I could no longer imagine writing anything serious with MS Word or an open-source equivalent. The flexibility, speed, and ease with which I can do things in LaTeX just makes a WYSIWYG-style word-processor extremely unattractive. The same goes, but to a lesser extent, for presentations (e.g., PowerPoint) through LaTeX/Beamer. I only miss the ability to create fancy animations, but if I really need that, I find LibreOffice's alternative good enough. The hassle-free, STEM-friendly nature of LaTeX and the picture-ready documents it creates are just unparalleled (btw, STEM: Science, Tech., Eng., Math.).

As for Excel, I do find myself using LibreOffice Calc once in a while for very small tasks (it can't really handle large data anyway; neither can Excel, of course). I do have to be careful when using it, because if any colleague saw me using it, it would be quite embarrassing. Engineers don't use Excel, unless they want to attract ridicule. We mostly use Matlab or Octave, which are both much more powerful and flexible than Excel. That is, if not using more specialized tools. …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I think the most interesting ongoing development in this area is the IBM neurosynaptic chip design. These could be the next big DARPA-backed game-changer.

When you figure them out, apply for that Nobel Prize in Medicine...

Well... there are really two parts to this: the logic and the bio-chemistry. I think that the logic (how neurons, synapses, and learning work) is largely understood, and has been for quite some time, which is not to say that there isn't much more to be discovered. The bio-chemistry is much harder to understand overall, i.e., the influence of chemistry on the functions of the brain.

But when it comes to artificial brains or neural networks, the bio-chemistry is largely irrelevant (maybe A.I. will just be less moody than humans). And that's another awesome prospect of this: the chemical reactions that drive the signals across the brain are extremely slow and inefficient compared to electrical signals through silicon substrates. In other words, an artificial brain made from a silicon chip (like the IBM chip) would be several orders of magnitude faster at "thinking" than a human / mammalian brain.

One of the related topics in computer learning is called simulated annealing.

Well, simulated annealing is a very general method that is applicable to many areas far beyond artificial neural networks (ANNs). In fact, I am using it in a completely different kind of method (probabilistic motion planning), and it has also been used a lot in …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I agree that the most fool-proof method is to plug it in before starting the computer (or plugging it in and restarting). I imagine that a restart of the X server should work as well (there are a few ways to trigger a restart of X).

One thing that could throw a wrench in the works is if you are using a fixed Xorg configuration. This is not the default setup; by default, the system is set to dynamically check display devices and choose a reasonable configuration (best resolution, duplicate screens, or something like that, depending on the specs of the displays). However, there is backward compatibility for the ancient way to set things up, which involves an Xorg configuration file that specifies exactly the display setup to use, and in that case, the X server will firmly choose to use that configuration, whether it makes sense or not. In modern times, this is almost never used (and there are also other, more semi-fixed configurations, such as Xrandr, that are more appropriate). If you had set up such a manual configuration, you would remember it, because this is not like flipping a switch by accident. However, if you had some graphics driver issues and used some instructions from the web on how to solve them, you might have set up such a fixed configuration without knowing it. If this is the case (check the xorg.conf file), then you might have to find a way to revert …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I see, from a post on another forum, that you used Wubi to carry out the installation. Personally, I never really trusted Wubi (and never recommend it), in part due to the fact that Windows still underlies the virtual disk (which is kind of unreliable, as all Windows things are), and due to the large number of warnings and cautionary disclaimers on their own official wiki.

Here are a few of the warnings that seem like they could be relevant to your particular case:

Warning

Wubi uses a virtual disk that is sensitive to forced shutdowns. If Ubuntu appears to be frozen please refer to: How to reboot cleanly even when the keyboard/mouse are frozen

Wubi does not work on any new PC with the Windows 8 logo or using UEFI firmware. Please use a 64-bit flavour of Ubuntu, installed directly to its own partition instead. For more information see https://help.ubuntu.com/community/UEFI
If you upgraded to Windows 8 and are using BIOS firmware, Wubi does work, but do not enable hybrid-sleep on Windows 8.

This seems like a very plausible problem that you have encountered.

Which Operating Systems are supported?

Windows 7, Vista, XP, and 2000 are known to work with Wubi.

That means, no Windows 8.

Installation error while formatting the swap file

If the installation fails while formatting the swap virtual disk it means that your drive is excessively fragmented. Uninstall, run jkdefrag on the target drive and …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Your assign function takes the parameter by value, which means that it creates a local copy of the passed variable and changes only that. If you want the changes made in the assign function to be reflected in the variable passed to it, you need to take the parameter by reference. Like so:

void assign(Number& );

//....


void assign(Number& input)
{
    int num;
    num = rand() % 10 + 1;
    if(num % 2 == 0)
    {
        input.type = Even;
        input.value.i = num;
    }
    else  if(num % 2 != 0)
    {
        input.type = Odd;
        input.value.d = num;
    }
}

Notice the ampersand & added to the parameter type; it signifies that the parameter is taken by reference (referring to the variable passed to the function).

Also, I'm using srand, but getting the same values every time it runs.

Your use of srand is commented out of the code. If you comment it back in, it should work:

srand((unsigned int)time(0));

You are doing it correctly, i.e., calling it only once at the start of the program and seeding it with an often-changing value (e.g., the current time).

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

There are tons of distributions of Linux, and in each case, the installation process is a bit different. And you still haven't said whether you want to go for a dual-boot or for a virtual box. With that, it is hard to provide you with a "trusted link" with instructions, because that would depend on what you want exactly.

When it comes to distributions, you should probably start with one of the most popular ones, like Ubuntu (or Kubuntu), Mint, Fedora, or OpenSUSE. Any of these distributions will probably be quite easy to install and use. In all cases, what you want is the "desktop" version, because the alternative is usually a minimalistic or server version, which often does not even have a graphical interface at all. For reliable instructions on how to install them, refer to the official instructions that those distributions have on their websites, like the side links on this Ubuntu page.

Whichever way you decide to go (virtual box or dual-boot), the process is fairly straightforward. First, you should create a LiveDVD or LiveUSB, which is both the install media (so you have to create it anyway) and also a "Live" version of the OS, meaning that you can boot into it and it will be pretty much like it will be when you install it (but slower, because it runs off the USB/DVD drive). With that, you can try the distribution …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

It seems to me like a connection problem of some kind. It's possible that there is a bug with the game, with its server communication protocols, or with its compatibility with Win-8.1.

At least, make sure everything is up-to-date, that includes: Windows, graphics drivers, the game, and DirectX. If the problem persists, I think that it would be an issue to report to Blizzard. I assume it hasn't always been like that, right? If it used to work better before, then the problem is most likely due to the Win-8.1 update or a bad driver or a recent bug in the game that needs fixing.

Don't buy a new graphics card, it's extremely unlikely that this is a hardware problem, or that you have insufficient capabilities, because that box seems to be packing some serious punch.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

@Mike, what did you think of Contact?

It was a good movie, but nothing more. I have always had mixed feelings about it. I never could stand the sappy ending and all that wishy-washy, obnoxious, preachy stuff. The movie has its moments and cool concepts, but the preachy stuff is just too forced.

But if you are talking about scientific accuracy, I don't care much when it's a sci-fi movie that isn't really trying to fit with reality. As long as it is a positive / respectful / realistic depiction of scientists and engineers, I'm happy. A good example of that is Star Trek, as per this essay.
But Contact wasn't very positive in its depiction of science or scientists; in fact, it was quite disrespectful (and clearly written from an outsider's perspective), and that's another problem I have with that movie. The portrayal of the main scientists is extremely unrealistic and stereotypical. I think that was really the worst part of it. It felt like the writers had no clue about this world, nor did they have any interest in finding out about it; they just wrote it to fit their pre-conceived notions and the Hollywood stereotypes. I know that Carl Sagan wrote the book it's based on (and I have not read it), so I would assume that the screenplay and directing are most to blame for this particular problem.

Come to think of it, I probably didn't like that movie that much after all. …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Yeah, Dark City is definitely a great movie, and really underrated, or maybe it just clashes too much with mainstream.

I just saw Gravity yesterday... yeah, I know, I'm a little late on that. The movie was really good... very dramatic and very well shot.

But having a Master's in Space Science and Technology kind of spoiled it a bit. It was kind of hard to "suspend disbelief" when there are so many inaccuracies (maybe subtle enough for the uninitiated, but glaring to trained eyes). I understand that if it were made to be entirely accurate, it would either be an hour and a half of completely uneventful work on the Hubble telescope, or the movie would be over in 5 minutes after one shocking scene. So, I get it, reality is bent to serve the story. But still, it's hard to watch when you go "WTF!" every 2 minutes throughout the movie.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

It's the same as in C, except that the header is included with #include <ctime>, and all the functions and stuff are in the std namespace (so you have to do std:: in front of each name, or do a using-statement).
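For instance, a minimal sketch:

#include <ctime>     // the C++ version of C's <time.h>
#include <iostream>

int main()
{
    std::time_t now = std::time(0);          // current calendar time, just as in C
    std::tm* local = std::localtime(&now);   // broken-down local time
    std::cout << "Year: " << (local->tm_year + 1900) << std::endl;
    std::cout << std::asctime(local);        // human-readable date-time string
    return 0;
}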

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Yeah, you Europeans get the warm ocean current smashing on your coasts, while we get it on its way back from Greenland!

I live in Montreal, Canada, 45°30′N. Today, it's -16°C / -10°C.
At the same latitude, there is Venice, Italy, where it's now 8°C / 11°C, or La Rochelle, France, where it's now 4°C / 7°C.
And I used to live in Kiruna, Sweden, 67°51'N (above the arctic circle). Currently, it's -7°C / -5°C over there. That's 1,400 km farther north, and still warmer than here!

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Are you talking about template specialization, or are you talking about overloading function templates?

rubberman commented: As usual, you give good examples/links! :-) +12
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

You can convert almost anything into anything with the combination of unoconv and pandoc.