mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

You need to post your code, or at least, a subset of it that reproduces the problem you have. We cannot help you if you don't provide sufficient information.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

XP is no longer supported by Microsoft, meaning that you don't get any more patches for security vulnerabilities and the like. That said, XP was probably the best of the Windows versions, and it's still a perfectly good OS to use, as long as that computer has no contact with the internet. That's basically what I have done with a couple of computers that run some critical legacy software on XP: I just impose very strict port-blocking and internet-blocking for them (i.e., no communication with the outside world and strictly limited communication within the local network).

Win 7 is probably the next best thing after XP. There have only really been 3 versions of Windows that were decent: 2000, XP, and 7.

But of course, none of them compare with Linux. Nothing matches Linux in terms of performance, stability, versatility, and "expressive power".

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

You need to read the rules of this forum, especially the rule that reads as follows:

Do provide evidence of having done some work yourself if posting questions from school or work assignments

Don't copy-paste your homework assignments (or other exercises) and expect people to simply give out a fully working answer for it, because that is against our rules and it is against your best interest.

This warning is good for all your other threads too (1 2 3). If you persist in spamming the forum with copy-pasted exercise questions, then we'd have to give you an infraction for it and delete the posts.

What have you tried so far towards solving this problem? Please show us the code you are working on and ask questions about the specific things that are preventing you from solving it.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Yes, it's fixed for normal users too.

Btw, here is my tutorial (shameless plug ;) ):
https://www.daniweb.com/software-development/cpp/tutorials/492425/keep-it-hot-the-secret-to-high-performance-code

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Oh.. that's sad.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Your comparison operator takes two references, but you call it with two pointers. When you write &clock2 == clock3, you are taking the address of clock2, which gives you a pointer Clock*, and then you compare it to clock3, which is already a pointer. What you need to do is clock2 == *clock3, which dereferences clock3 and calls the comparison operator with two objects (implicitly taken by reference by the comparison operator).

And by the way, the comparison operator does not modify the objects referred to by the references it is given. So, you should take those objects by const-reference instead (and make your "getter" member functions get_hour and get_minute const as well), as so:

friend bool operator ==(const Clock& c1, const Clock& c2)
{
    return c1.get_hour() == c2.get_hour() && c1.get_minute() == c2.get_minute();
}

// with the getters declared 'const' like this:
int get_hour() const;
int get_minute() const;

Also, note that if you implement the comparison operator only in terms of public member functions of the Clock class, then you don't need to make it a friend function; you can simply define it outside the Clock class as a normal non-friend function. As a general tip for good coding practice, don't use friend more than you have to.
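
For example, the non-friend version could simply be this (assuming the const getters shown above):

// Defined outside the Clock class; no friend declaration needed, since it
// only uses the public member functions:
bool operator ==(const Clock& c1, const Clock& c2)
{
    return c1.get_hour() == c2.get_hour() && c1.get_minute() == c2.get_minute();
}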

senait.kifle.127 commented: Crystal clear sir! Thanks a lot! +0
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I never thought about sorting derived objects in the container so that the same function keeps getting called. I would suppose this would also help with vtable lookups?

Yes, that's a good point. The vtable lookup is also part of it. Virtual tables, like anything else, can also get "cold". You basically get twice the cache-miss overhead on a cold function call when the call is virtual.

Actually, part of the benefit of using boost::variant (in either case of single-dispatch or double-dispatch) is that the call is based on the type index value stored in the variant object and then uses overloading. This is basically equivalent to a virtual function call (just a table lookup), except that the table is always hot because it's local to the main loop.
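
To illustrate that dispatch style, here is a minimal sketch with hypothetical shape types (not the actual code from the tutorial):

#include <boost/variant.hpp>
#include <iostream>
#include <vector>

struct Circle { double radius; };
struct Square { double side; };

// The visitor's overloads play the role that virtual functions would play:
struct PrintArea : boost::static_visitor<void> {
    void operator()(const Circle& c) const { std::cout << 3.14159 * c.radius * c.radius << '\n'; }
    void operator()(const Square& s) const { std::cout << s.side * s.side << '\n'; }
};

int main()
{
    std::vector< boost::variant<Circle, Square> > shapes = { Circle{1.0}, Square{2.0} };
    for (const auto& shape : shapes)
        boost::apply_visitor(PrintArea(), shape);  // dispatch on the stored type index
}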

Since range-based for loops get converted to using iterators, would you use:
Or would you compute the size of each nested vector and use those results

I don't see the point in getting all the sizes ahead of time. Getting the size of a vector is a constant-time operation, usually just a matter of subtracting two pointers internally.

And using iterators is the most efficient way to iterate through a vector. The range-based for-loop is always the best choice, if you can manage to use it (sometimes it's just more convenient to use indices).
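
For instance, with a vector of vectors, the nested range-based for-loops would look like this (a minimal sketch with made-up data):

#include <iostream>
#include <vector>

int main()
{
    std::vector< std::vector<int> > grid = { {1, 2}, {3, 4, 5} };
    for (const auto& row : grid)   // outer loop: each inner vector, by reference
        for (int x : row)          // inner loop: each element of that vector
            std::cout << x << ' ';
}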

In your particular case, I would certainly use the range-based for-loops. Notice that you made a small typo: the nested loop …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

This tutorial is the first time I write about coding techniques for improving the performance of software. This is going to be a step-by-step journey through a real example, with real benchmark results at every step so that you can see the effect of the different optimization techniques.

And as this is my first tutorial on performance optimization, I have to talk about the most important principle in performance optimization: keeping things hot. If you expect this tutorial to be about showing bits of code and the assembly listings generated by the compiler and then showing how to reduce the number of instructions using nifty tricks, then you are sorely mistaken about what has the most impact on performance.

When I named this tutorial "Keep it Hot", I wasn't thinking about the Cameo song from the 80s, although you're welcome to listen to it while you read this tutorial, that is, if you agree with me that a funky beat is the real secret to producing good code!

You might be wondering what it is exactly that is supposed to be kept hot. Well, software is about using memory and machine instructions to compute values. So, what should be kept hot? Values, memory and machine instructions. And what does it mean to be "hot"? It means to be in high demand, as in the expression "a hot commodity".

Background on the real example

Let me just tell you how this tutorial came about. I wrote some simple collision detection code a …

NathanOliver commented: Superb +13
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Yes, it has been fixed.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Could it be related to being a moderator? As opposed to a simple member (who can't move anything) or an admin (who can move tutorials).

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Hey,

So, I'm trying to create a tutorial draft for a tutorial that I just wrote. I copied all the text into the editor, put in the title, tags, and marked it as a tutorial draft for the C++ forum. I hit the submit button and I got the following errors:

"You cannot move an article of this CMS type to this forum."
"This CMS type doesn't exist."

What the heck is CMS? And, what's going on?

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

How could it possibly be a legacy language? A legacy language is basically a language in which lots of software exists that still needs maintenance, because it is still used a lot, but that has fallen out of favor for starting new projects, usually because more modern languages are more appropriate to the contemporary landscape of software development.

Some old languages are not even legacy languages, because they never reached critical mass (e.g., too few people adopted them, they were flawed in some way, or they never made it out of the "academic" enclave).

The Go language is too young to know which way it will go. All languages that have reached critical mass have pretty much taken 10 years or so to take off, like C, C++, Java, and some of the other now legacy-ish languages like COBOL and Fortran. Also, most language designs are pretty terrible in their first drafts. C was pretty bad before C89, during the "K&R" rough cuts of the language. C++ was pretty bad from 1985 to 1995'ish. Java was pretty rudimentary from 1995 to 2006.

The Go language is definitely still in its incubation period. I mean, it's barely 6 years old... it's almost a toddler. The same goes for the Rust language.

As it currently is, the Go language has some nice features but it also has some major flaws, as I have discussed on this thread. If they keep those flaws, I don't think …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I'm not aware of github having its own GUI client. There are GUIs for using Git, though.

I have always just used command-line tools for version control (git, svn, etc.). I never understood the need for GUIs for that. And I always find it annoying when GUIs constantly attempt to "dumb things down" by assuming some sort of default behaviour that is probably not what you want or should be doing in real-world situations. For instance, I would assume that any Git GUI would default to the "master" branch all the time, which is fine for toy examples, but nobody works on the master branch by default on real projects (you try features out and stabilize them in a separate branch before merging them into master).

However, I've grown to love Github's website for the way it presents and interconnects things like forks, commits, issues and pull requests. It's one of the few GUIs for development that I find to be very productive and streamlined. I just wish that some of the C++ dinosaurs at LLVM/Clang and Boost were more willing to switch to it or use it more.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I agree with diafol, you should just use the font used in editorials.

RikTelner commented: Agreed. +0
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I have no idea what you are talking about with those "tags to identify the language"... do you have an example of a repo that is tagged like that... I can't remember noticing that anywhere.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

GCC: GNU Compiler Collection
So, GCC is an all-encompassing term for a collection of compilers. It is true that the gcc / gcc.exe program is mainly a C compiler (it can also compile C++, but you have to make a few contortions to make it work), while the g++ / g++.exe program is for C++ code. Most people just say "GCC", just like people say "MSVC" (short for "Microsoft Visual C++") instead of "cl.exe".

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I don't know what your issue is. I have always used SSH. I think you should consider switching to that.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

There's no tail recursion in that function. For tail recursion to apply, you generally need to have nothing appearing after the recursive call (the call to "back()"). In this case, a character is printed out after the recursive call, which means that the character value must be retained through the recursive call, which breaks tail recursion (or makes tail-call optimization impossible). If someone claimed that this code uses tail recursion, then that person is wrong.

The easiest way to tell if something can be tail-call optimized is to see if you can manually translate the function into an iterative one (instead of recursive) and see if you can do it without auxiliary memory. In this case, there is no way to do that iteratively without having O(N) memory. For example, one way to do this iteratively is to store all the characters into an array and then traverse that array in reverse to print out the characters, which implies that you need an array of N characters to make this work. The recursive form you have there does the same thing, except that the characters are stored in function-call stack-frames instead of in a simple array, which is just inefficient and bad style (and unsafe, by the way).
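
To make this concrete, here is a minimal sketch of what I assume the function looks like (a hypothetical reconstruction, since the original code isn't in front of me):

#include <cstddef>
#include <iostream>
#include <string>

// Prints the characters of a string in reverse order. The recursive call is
// NOT in tail position: the printing of s[i] happens after the call returns,
// so every character has to be kept alive in its own stack-frame.
void back(const std::string& s, std::size_t i = 0)
{
    if (i == s.size())
        return;
    back(s, i + 1);     // recursive call...
    std::cout << s[i];  // ...followed by more work: no tail-call optimization possible
}

int main()
{
    back("hello");  // prints "olleh"
}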

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Why on earth would the pointer symbol * be pushed back and be separated from its pointer name with the increment operator in the middle?

The * is not part of the pointer name, it's part of the type when you declare the pointer, like int* p; which means that p is a pointer to an int. But when you later do *p, the * symbol is a dereference operator, which means that *p fetches the value stored at the address pointed to by p.

So, to understand the expression *++p, you have to read it inside out. The ++p increments the pointer p, meaning that it moves its address one increment forward (the number of bytes that this increment represents depends on the size of the thing that the pointer points to; e.g., if p is an int*, then the increment is of sizeof(int)), and then it returns the final value of p after the increment. In other words, the expression returns the address that is one increment after the original address that p pointed to. And finally, the * dereferences that address, as if you did *(++p) or p += 1; *p.
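
Here is a small self-contained example, just to illustrate:

#include <iostream>

int main()
{
    int arr[3] = {10, 20, 30};
    int* p = arr;                // p points to arr[0]
    int value = *++p;            // ++p moves p to arr[1], then * fetches that value
    std::cout << value << '\n';  // prints 20
}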

I've recently read that C++ 11 makes dealing with pointers a lot easier, if that's the case does the current version of g++ support that aspect of the new C++ 11 standard?

Well, you are always going to have to understand pointers. But C++11 provides library components that help you manage the …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Following the bits of research I did on Go, I started looking into another recent language, Rust, so I thought I'd share some thoughts about it here too. It's still in a very early stage of development, but it has some truly amazing features. There are a few corners that are still rough, and the language is fluctuating a lot (which is OK, because it isn't really a production-ready language yet). I especially like their statically-validated ownership / lifetimes rules that guarantee memory safety, combined with the "immutable by default" rule, which is a nice way to bring in the benefits of functional programming without the impractical aspects of it.

Also, Rust uses a mechanism similar to Go for their "interfaces", but they call them "traits" and they are closer to C++ concepts (the N3701 "Concepts-lite" proposal) as they are checked at compile-time and form "generics" (which are actually more like C++ templates, which means no run-time overhead, no indirection, no restrictions, unlike Java/C# generics, which are crap). But the neat thing is that they can also be cast (or coerced) into a run-time mechanism (dynamic dispatch, a.k.a. virtual functions) very easily (through what they call "trait-objects"), which is cool (and it's what I wish C++ would add to the concepts-lite feature too). But if I were to design that feature, I would make those trait-objects have value-semantics instead of the current reference-only semantics, so that they could be used directly in place of generic parameters (sort of like I demonstrated in my …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Well, there are a few problems with that code. I cannot, however, pinpoint where that specific error is coming from; are you sure it comes from this code?

Now, for the errors that I can see:

First, you should not use = NULL for pure virtual functions, you should use = 0. Many compilers will reject the = NULL form.

Second, you've made a spelling mistake with the "AbstractArrayClass". In the declaration of that class template, it is written as "AbstactArrayClass", and in the AbstractVector, it is written as "AbstractArrayClass". Notice the missing "r" in the first one.

Third, the friend declaration of operator<< does not match the function template definition. In fact, such friend functions are not templates at all. This is one of the more quirky and confusing parts of the C++ rules. A non-template friend function declaration for a class template declares a non-template function for each instantiation of the class template. ... I know.. this might sound like gibberish.. but read it very carefully.

For example, if you have this:

template <typename T>
struct foo {
  friend void bar(foo<T> f);
};

The bar function is not a template. What happens is that when you instantiate the foo class template for some type T, let's say you create a foo<int>, the compiler will magically make a function bar appear with the signature void bar(foo<int> f);, and that function that magically appears is not a template. So, if you declare another bar like this:

template <typename T>
void bar(foo<T> …
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I think it boils down to picking your battles. The small stuff like naming conventions and indentation is not worth breaking a sweat over; just follow the guideline, get used to it, and build your finger muscle-memory for it.

But for coding restrictions that you think lead to sub-optimal designs or bad code ("bad": poor performance, unmaintainable, unreadable, unsafe, etc.), you should probably just sit on it until you find a good opportunity to demonstrate (with real code in the real project) why your way would be better ("better": performs better, more maintainable, more readable, safer, etc.), and then use that real example to make an argument to change that restriction. Either way, doing this will be good for you, because it will either confirm and demonstrate that you're right to do things that way, or debunk your own practices as being either equivalent or worse than your company's coding practices.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I've never looked into this language before you mentioned it. It has some interesting elements. I'm glad you got me to look into it.

First, I love the built-in concurrency support in Go. It has Ken Thompson's finger-prints all over it. It's basically structured like Unix pipes, which are awesome. This whole way of doing concurrency is something that I have grown to love, and I wish there was more support for it in C++. The C++11/14 mechanisms like future/promise and async go a long way, but are not quite there yet; there is more juicy stuff to come in C++17, so I've heard.
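
As a rough illustration of how far the C++11 mechanisms go (a minimal sketch, not a match for Go's channels):

#include <future>
#include <iostream>

int compute_answer()
{
    return 42;  // stand-in for some long-running computation
}

int main()
{
    // Launch the task asynchronously; the future is the receiving end of the "pipe".
    std::future<int> result = std::async(std::launch::async, compute_answer);
    // ... do other work concurrently ...
    std::cout << result.get() << '\n';  // blocks until the value is ready
}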

Second, the ditching of inheritance in favor of "interfaces" (as they're called in Go) is another cool decision. I use this pattern all the time in C++, and I much prefer it to the traditional OOP patterns (inheritance); I even wrote a tutorial about one form of it. It's part of a trend, mostly born out of modern C++ practices, that is about realizing that dynamic polymorphism is great but inheritance is just the wrong way to do it. Inheritance is still useful for boilerplate code in generic programming (and a few other techniques), but since Go doesn't support generic programming, I guess it doesn't matter. But I love the way Go makes the creation of such "interfaces" so seamless and clean. C++ is about to get a feature called "concepts" that I believe could be used for a similar purpose; I'm seriously considering …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

This video is also a great explanation of this issue, mainly in the form of how not to do it and why, followed by the proper solution used today, which is hashing + salting.

The basic idea is really that you don't need to store a password in plain text, or be able to retrieve its plain text representation, because all you need is to be able to validate the password given (when logging in). So, you store the password in some "encrypted" way (actually, with a salted hash) and you just compare it (to validate it) using that encrypted form.
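
Here is a minimal sketch of that store-and-validate flow. Be warned that std::hash is used here only to illustrate the idea; a real system must use a dedicated, slow, salted password-hashing function (bcrypt, scrypt, Argon2, etc.):

#include <cstddef>
#include <functional>
#include <random>
#include <string>

struct StoredPassword {
    std::string salt;  // stored alongside the hash
    std::size_t hash;  // hash of (salt + password); the password itself is never stored
};

StoredPassword store_password(const std::string& password)
{
    std::random_device rd;
    std::string salt = std::to_string(rd());  // toy salt; real salts are longer random byte strings
    return { salt, std::hash<std::string>()(salt + password) };
}

bool validate_password(const StoredPassword& stored, const std::string& attempt)
{
    // Re-hash the attempt with the same salt and compare against the stored hash.
    return std::hash<std::string>()(stored.salt + attempt) == stored.hash;
}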

For instance, this is the reason why, when you've lost your password (can't remember it), you cannot get that password back; all you can do is get a new password generated for you or some temporary link to reset the password. There are still places that store passwords in plain text (or in a way that the plain-text passwords can be retrieved), but they shouldn't do it, and if you realize that any important site or service uses that method, you should avoid having an account with them, unless that account is "harmless" (e.g., a mailing-list subscription, or something like that, which doesn't store any sensitive information). And obviously, if you have to be subscribed to a service that stores plain-text user passwords, then make sure you don't use the same password(s) as for your more sensitive accounts (email, paypal, computer login, etc..).

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Whether it's in software engineering or other engineering fields, I don't see how a university could justify having a reverse engineering course. Sure, there are some legitimate reasons to do reverse engineering sometimes, but it's the exception more than the rule, and, at the end of the day, you don't really need any special knowledge in addition to your normal "forward" engineering knowledge.

I did some reverse engineering work once, for legitimate reasons, because the company I was working for had an old product that they had been mass producing for decades but had lost all records of its design, and the guy who designed it was dead. But such cases are rare, and when you have to do it, there's no secret to it, it's just the same knowledge and skills required. But it sure is fun to do though! It's a little bit like technological archeology, where the final product is all that remains of a long and complex design process that you have to attempt to reconstruct.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Analog computers already exist, and they are pretty great. AFAIK, they have fallen out of favor due to modern computers being so powerful now. In the older days though, analog computers were a great way to perform complex calculations. They were used a lot for things like control systems for automation equipment and the like. One of the big advantages they have is that not only can you compute complex equations with real numbers (actually, complex numbers too) directly, without having to rely on a digital representation of them, but they can also simulate dynamic systems in real time. For example, if you have a mechanical system (like a mechanism or robot), you can simulate its dynamic behavior in real-time using an electrical circuit that replicates it (most mechanical things have an electrical analog, with the movement of electric charge playing the role of the movement of objects and the voltage playing the role of the forces on those objects). Also, things like control systems and signal processing filters are often formulated as dynamic systems that can be realized with an analog electric circuit. So, back in the days when digital computers were just impractical, things like early industrial robots or other similar automated machinery used analog computers (specially designed electric circuits) for the signal processing, control systems and even kinematics and dynamics calculations (for things like model-based control and closed-loop inverse kinematics).

Today, analog signal processing is still a very important first step to any complete …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

If you look at the practical example section, where they show you how to visualize the garbage collector using that VisualGC tool, you can clearly see that the generational GC is what is being used in practice.

The point really is that the "mark-and-sweep" process is the fundamental mechanism by which garbage collection is done. And the mark and sweep is still the way that the generational GC works, it's just that the generational approach is an improvement over a basic mark-and-sweep by creating a distinction between short-lived and long-lasting objects. It is merely taking advantage of the fact that when you are creating and throwing away large quantities of small objects quickly, then it is better to do mark-and-sweep passes much more frequently (to avoid letting the memory usage grow too much or be too full of garbage before collecting it). And conversely, when you have long-lasting objects (e.g., singletons, "main window" object), you can probably just let them sit for a longer time without checking for garbage. So, all that the generational approach does is to split the memory into a section of "young" memory that is marked-and-swept very regularly, and a section of "old" memory that is marked-and-swept far less frequently, with some additional logic to be able to promote things from young to old.

Probably some older JVM versions used a vanilla mark-and-sweep with no generational segmentation. But I think that modern versions are more refined than that. But again, generational GC is not an alternative

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

You need to include the "PL.h" file within your PosLin.cpp source file. The problem is most probably that your implementation of SurTriPosRotAndQ requires a complete type for PosRotAndQ, not just a forward declaration. So, if you include the PL.h file from the cpp file, you will have the complete declaration available when implementing SurTriPosRotAndQ. This is the normal way to work with forward declarations, you forward-declare the class in the header and include the header for that class in your cpp file, where you need the complete type.
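
In other words, the pattern looks like this (a sketch with simplified, partly hypothetical declarations, since I don't have your full code):

// PosLin.h
class PosRotAndQ;  // forward declaration: enough for pointers / references

class SurTri {
    PosRotAndQ* motion;       // OK: only needs the incomplete type
    void SurTriPosRotAndQ();  // hypothetical member, for illustration
};

// PosLin.cpp
#include "PosLin.h"
#include "PL.h"  // brings in the complete declaration of PosRotAndQ

void SurTri::SurTriPosRotAndQ() {
    // here, PosRotAndQ is a complete type, so its members can be used
}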

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I would echo the same sentiment as others have: no one could question the "morality" of choosing to participate (voluntarily) more in one forum or another, or in none.

I also visit SO regularly and post on occasion. But I feel that I wouldn't be able to keep my sanity if I spent too much time there. There are way too many "Wheaton's law infractors" over there (to avoid a more explicit word that wouldn't pass the bad-language filter). I've been scolded too many times over there for being too "daniwebby" with my answers, i.e., being helpful, opinionated, nuanced and original. I think they very much prefer answers that are short, black-or-white, peddle preconceived notions and play in their echo chamber, and they are really quick with the "copy-google-click-copy-paste-post" automatism that just produces uninteresting junk answers. Some of the highest-profile members on SO are really living in a bubble, and I sometimes fear for their mental health. Especially since my interest is C++: some of the guys over there are just fanatical about standard guarantees and stuff like that, and they seem to have lost all notion of real-world programming (e.g., I've even had someone prominent tell me that it is undefined behavior to pass an integer to a C function from another C library, which is technically true but a completely ridiculous notion in real-world programming; this is the kind of la-la-land stuff you constantly have to put up with on SO).

But if this …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Yeah. I remember now, that was the extra step I had to do to claim the cash-out reward, because I was in that situation where my daniweb account had a different (old) email address that wasn't the same as the one used for my paypal account.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

If you are going to be hacking away at the Linux kernel, you have to be prepared to deal with tons of specific data types. And that's not even a tiny fraction of all the "exotic" stuff you'll see in there. Kernel coding is a dark art, and lots of shadow monsters lurk in the darker corners of the Linux kernel.

Why use these types instead of the usual data types?

Well, for one, size_t is a very usual data type: it's the standard C type for representing the size of an object (the type produced by sizeof), an unsigned integer that, on most platforms, has the same size as a pointer, which is obviously very useful.

But generally-speaking, there are many reasons for using different names for integer types (which are often just typedef names for one or another built-in integer type). First of all, there are times when you should use the most "native" integer types (e.g., those that are best for the target instruction set, can represent addresses, or can be packed optimally in registers, etc..), and there are also times when you need to use integers with a fixed number of bits regardless of the target platforms. Remember, kernel coding involves a lot of fiddling with bits and tightly packed binary data structures, which is the kind of stuff where you must choose your types very carefully.
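
For example, here is the kind of situation where fixed-width types are necessary (the kernel has its own typedefs like u8 / u16 / u32; the standard equivalents are shown here):

#include <cstddef>
#include <cstdint>

// A packed binary structure for some hypothetical on-disk / on-wire format;
// the exact bit layout must be the same on every target platform:
struct PacketHeader {
    std::uint8_t  version;   // exactly 8 bits, everywhere
    std::uint8_t  flags;
    std::uint16_t length;    // exactly 16 bits, regardless of the target
    std::uint32_t checksum;  // exactly 32 bits
};

// By contrast, "native" types are chosen for speed and addressing:
std::size_t byte_distance(const char* first, const char* last)
{
    return last - first;  // pointer arithmetic fits naturally in a native-sized integer
}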

Another reason for this is that it is often very important, for optimizing performance, to carefully tune your data structures, including the size or …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

We cannot decide for you, because we don't know what you want or what would make you happy.

Rubberman put it very well: computer science is science, and computer engineering is engineering. Don't expect a computer science degree to make you a proficient programmer. There are certainly some people that cross over from CS to being a programmer, or that learn both in parallel, but don't buy into the common delusion that the two are the same thing.

The best analogy is in the difference between physics and mechanical engineering. The primary purpose / occupation of a physicist is doing research on investigating and discovering fundamental principles of the physical world. The primary purpose / occupation of a mechanical engineer is designing, building, and testing complex machinery. As it relates to CS and SE, just replace the "physical world" and "machinery" with "computer" and "software".

Both of these are very important and can be very exciting careers, if they are right for you.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

There is definitely another step involved... I was also confused by this when I cashed out about 6 months ago. The problem is that I don't quite remember what that step was. I think it was about getting into your PayPal account and approving the incoming transaction... or something like that. Dani will probably come around to confirm.

Slavi commented: Thanks, looking forward to see if she replies =) +0
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Those three move operations are for passing the parameters of the function. This is according to the calling conventions. It is only a bit special here because the function signature is such that all the parameters can be passed via registers instead of on the stack (which is the more usual case).

Here is a basic explanation:

  • movl $17, %edx : Passes the integer value 17 as the last argument to the function call (__ostream_insert), which is optimized by passing the value through the EDX register (a general-purpose integer extended (32bit) register often used for argument passing). Btw, the value 17 is the length of the string "I am initialized!", which is the required third parameter to the __ostream_insert function.
  • movl $.LC0, %esi : Passes a pointer to the string constant (marked by the label .LC0, as you can see in the .rodata read-only data section) as the second parameter, which uses the ESI register, which is a general-purpose pointer register.
  • movl $_ZSt4cout, %edi : Passes a pointer to the std::cout object, marked by the mangled external symbol _ZSt4cout (which will be resolved by the linker later), as the first parameter to the __ostream_insert function, which is a reference to an ostream object (C++ references are, of course, implemented as pointers). That pointer is passed in the EDI register, another general-purpose pointer register.
  • call ...__ostream_insert... : Calls the function, which just means that it does what is called a "long jump" to the execution address specified, which is, in this …
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I don't really know why you'd want to install Android on a laptop, but I guess it's possible (with android-x86). I would recommend a proper desktop version of Linux instead, like some variant of Ubuntu for example.

Generally, to make room for a Linux installation, you would go into Windows and shrink the partition(s) so as to create 100GB of free space (leave it unpartitioned). Then, during the Linux installation (booting from a LiveCD / LiveUSB drive), you will reach a point (one of the first things to set) where you can specify where to install it. At that point, you can either select to manually specify the partitions ("advanced") or you can set it to "use the free space" (or something similar), which is going to partition your free space in some default way that should work just fine.

For example, for Ubuntu, you can follow these detailed instructions, which are written for Windows 8 but are just the same for Windows 7. If you don't want to use the btrfs file-system mentioned in that tutorial, you should select EXT4 instead of btrfs in the partitioning menu when selecting the format for the / mount-point.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

You can easily translate the assembly that I posted into x86_64 assembly using as and objdump. Let's say you put the assembly listings in "foo.s", just do this:

$ as -o foo.o foo.s
$ objdump -d foo.o

And you get the following:

foo.o:     file format elf64-x86-64


Disassembly of section .text.startup:

0000000000000000 <main>:
   0:   48 83 ec 08             sub    $0x8,%rsp
   4:   ba 11 00 00 00          mov    $0x11,%edx
   9:   be 00 00 00 00          mov    $0x0,%esi
   e:   bf 00 00 00 00          mov    $0x0,%edi
  13:   e8 00 00 00 00          callq  18 <main+0x18>
  18:   bf 00 00 00 00          mov    $0x0,%edi
  1d:   e8 00 00 00 00          callq  22 <main+0x22>
  22:   31 c0                   xor    %eax,%eax
  24:   48 83 c4 08             add    $0x8,%rsp
  28:   c3                      retq   
  29:   0f 1f 80 00 00 00 00    nopl   0x0(%rax)

0000000000000030 <_GLOBAL__sub_I_main>:
  30:   48 83 ec 08             sub    $0x8,%rsp
  34:   bf 00 00 00 00          mov    $0x0,%edi
  39:   e8 00 00 00 00          callq  3e <_GLOBAL__sub_I_main+0xe>
  3e:   ba 00 00 00 00          mov    $0x0,%edx
  43:   be 00 00 00 00          mov    $0x0,%esi
  48:   bf 00 00 00 00          mov    $0x0,%edi
  4d:   48 83 c4 08             add    $0x8,%rsp
  51:   e9 00 00 00 00          jmpq   56 <_GLOBAL__sub_I_main+0x26>

And if you want it in PPC or any other architecture that is different from your host architecture, then you just need to specify the target architecture options for the assembly and disassembly, and you'll need to have them installed (basically, install the GNU cross-compilers for …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

well its not a homework question...it was in a paper

I'm curious what you mean by that. What kind of paper? A research paper.... lol... that's an amusing thought, that a research paper would contain a basic computer science 101 question.

but i want expert opinion on it...

No, you don't. When you want an opinion, you ask questions like "what are the trade-offs of ..." or "what's better, this or that?", i.e., you ask open-ended questions that demand an opinion. And when you want an expert to weigh in, you ask a question fit for an expert. What you asked was a novice-level question that was structured to trigger "item" responses, and those two things are very strong indications of being a homework / exam / whatever question.

You can't fool us, we've seen it all.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I agree with the others that with a bit of effort and some good recovery tool, you can probably get back nearly all the data off of your drive. Because you only overwrote 1.4GB, it probably didn't touch much, except for some OS stuff. So, that means that probably all your personal data files will be recoverable, but the OS (Windows) will almost certainly have to be reinstalled, as far as I know.

Why do they nick dd the data destroyer? beats me...

Yeah... the dd command is the Linux command that strikes the most fear in me. Every time I use it, I check, cross-check and check again, then take a minute to breathe, make sure I want to do what I'm about to do, and then I check one last time, and then hit the enter key.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

You also have to make sure that the name and signature match those declared in the header file; you need to have exactly this in the source file:

double funtion1(double, double) {
  // ... the code
}
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

You need to add each source file to the project (through the Code::Blocks menus) and build them all together. This will cause each source file to be compiled individually, and then linked together.

If you want to gain a more complete understanding of this, you can read my tutorial about this.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I guess Tcll would be good too.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

what's the equivalent of that in either PPC or x86_64 binary opcodes??

Are you asking us to compile this Python code? You can use a Python-to-C/C++ conversion tool, like Cython, and then compile it with a C/C++ compiler. If you specify PPC as the target, you'll get the PPC machine code. If you specify x86_64 as the target, you'll get that machine code. Your question doesn't make sense when asked of human beings; this question is the reason compilers exist, that's their job.

how a class/object would work on a CPU before I can start finalizing the designs.

A class or object does not work any differently on a CPU than plain old C code. After compilation / interpretation / JIT'ing / whatever, the code is just plain function calls, jumps and operations on registers. The concept of a "class" or an "object" doesn't really survive past the first pass of the compiler (the "semantic analyser").
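
To see this concretely, here is a rough sketch (in C-style code, valid as both C and C++) of what a member function call boils down to once the front-end is done with it:

#include <stdio.h>

struct Counter {
    int value;
};

// Roughly what a compiler generates for a member function like
// "int Counter::next()": a plain function taking the object as a parameter.
int Counter_next(struct Counter* self)
{
    return ++self->value;
}

int main(void)
{
    struct Counter c = {0};
    printf("%d\n", Counter_next(&c));  // prints 1
    return 0;
}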

I would recommend that you start by learning how to write object-oriented code in C. C is pretty much the closest-to-hardware language that is still human-readable, and it has a very straightforward and obvious (and standardized) mapping to machine code. If you wonder how anything could be done in actuality, just try to write the exact equivalent code in C (which is always possible, but sometimes hard). Then, if you really need to know what it looks like in assembly, just use a C compiler and ask it to generate the …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I love those benchmarks, they were very well done. I hope someone redoes this work on a more up-to-date platform / compilers / etc... I was a bit surprised by the lack of competitiveness of Fortran (compiled with Intel) compared to C (with gcc). It would have been nice if they had also included benchmarks of C compiled with ICC. The compiler really matters a lot. Take Pascal, for example, which ranks really badly even though the language itself permits pretty much all the same optimizations as C; it is orders of magnitude slower because there hasn't been a good new Pascal compiler in decades.

As for Java, one thing that is apparent is that the memory overhead is huge. I mean, you constantly hear people complain about the memory overhead of C++ compared to C, which is about 1KB. It appears that the memory overhead of Java is anywhere between 25MB and 1GB of RAM. Python is also surprisingly high on that. I guess that the rule for the benchmark implementations was to write the code purely in the language, without relying on bindings to C/C++ library code. That's where Python shines, i.e., the seamless integration with C/C++ library code, which makes most Python code just very thin (and easy / convenient) high-level code on top of a C/C++ infrastructure of library code.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

There is only one version of Windows that matters now, which is Windows 7. Everything else is either deprecated (old, unsupported, dangerous to use) or really terrible (Vista, or Windows 8). So, that's all there is to it.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I share that feeling that Java is on its way down. It is starting to feel more and more like a legacy language. I think that much of it has to do with promises that never materialized.

The promise of "portability through the JVM" is seriously undermined by a couple of factors. First, it was promised that eventually, any computing device would be powerful enough to accommodate it, but small resource-deprived and energy-efficient devices have proliferated, not diminished, with the advent of the "smart-everything" world. Second, the rapid and reckless evolution of the versions of Java has also caused a lot of portability hell. So, the end result is that in the "uber-portable" world of Java, you have to constantly worry about the JVM versions and subsets of your target platforms. By contrast, with native applications (e.g., C++), if libraries that you need are not already installed on a system, you just pull them in, which often can't be done with JVMs.

The promise of "no leaks via garbage collection" is also a failed experiment. Garbage collectors leak memory, that's just an irremediable fact, and diagnosing the root causes of those leaks is nearly impossible. The only practical solution for long-running Java applications (e.g., servers) is to periodically shutdown and restart the JVM. And there has been virtually no progress in this domain. But in the mean time, the competition has gotten so much better. On the side of native code (e.g., C++), coding practices, memory debugging tools, and code …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

The support for pixel / vertex shaders has nothing to do with the motherboard; it has to do with the graphics card (or its GPU, to be more precise). You need to find out which graphics card (or integrated graphics GPU) your computer uses. Then, you can look up what kind of shader support it has.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

The best way to represent a graph in C++ is to use the Boost Graph Library (BGL), for example with its adjacency-list class. The algorithms you are looking for are probably already implemented in the BGL; just look at the table of contents.
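
For example, creating a small undirected graph with the adjacency-list class looks like this (a minimal sketch, assuming Boost is installed):

#include <boost/graph/adjacency_list.hpp>
#include <iostream>

int main()
{
    // Vertices and edges stored in std::vectors, undirected edges:
    typedef boost::adjacency_list<boost::vecS, boost::vecS, boost::undirectedS> Graph;
    Graph g(4);  // 4 vertices, numbered 0 to 3
    boost::add_edge(0, 1, g);
    boost::add_edge(1, 2, g);
    boost::add_edge(2, 3, g);
    std::cout << boost::num_vertices(g) << " vertices, "
              << boost::num_edges(g) << " edges\n";
}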

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

The -I options are for "include directories". There should be a setting in Xcode to allow you to specify a list of include directories, in which you would enter api and ../C/api (or as absolute paths). For the cpp files, you just add them to your project.

N.B.: I don't use Xcode or Mac, so I'm just giving general advice here, as most IDEs have the same kinds of menus and configuration options, so, it should be valid for Xcode too.

Otherwise, you can also use command line tools in Xcode, you just have to install them.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

BTW, I like the new social media buttons... except that they're too big.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

@mike have you ever done GPU programming?

Not too much lately. In the old days, when I was first learning to program, I used to do a lot of 3D graphics game stuff (very amateurish, of course). But in those days, pixel shaders did not exist yet, so it was mostly software-based + fixed-pipeline (e.g., doing the texture blending on the CPU and then feeding it to the GPU through OpenGL's fixed-pipeline functions). But I did manage to do a few neat things even with only that; display-lists were pretty much the fanciest thing available (now they're pretty much obsolete, afaik).

It got a lot more fun when shaders started to appear, and I did dabble a bit with that, but things were still pretty basic back then (most graphics cards supported 2 multi-textures, maybe 4 if you were lucky; GLSL was a pretty restrictive subset of C; and there wasn't anything fancy like render-to-texture or VBOs). But at that point, I moved on to other things like robotics, dynamic simulations, control software, artificial intelligence and so on... I don't really do 3D rendering directly anymore (for the little 3D work that I do, I use Coin3D, which serves my purposes just fine, as I don't need fancy effects).

I would love to do some GPGPU, but I just haven't found the time or purpose for it yet. Part of the problem is that those kinds of parallel computations are difficult to do because most …