mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

The scale is in nano-seconds. So, this means that your clock's resolution is 1 millisecond (or 1 million nano-seconds). That's not great, but it should be just enough for your purposes.

I'm guessing that you are using Windows, and that your implementation / compiler does not yet support a "real" high-resolution clock, so it just uses the system clock, which has a resolution of 1ms on many modern versions of Windows.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

On most systems, there are a number of different accessible clocks, some more precise than others. And the C++11 standard reflects that with the use of std::chrono::system_clock::now() and std::chrono::high_resolution_clock::now(). The actual precision of those clocks is dependent on the system.

I'm afraid Ancient Dragon's statement is somewhat out-dated. For a long time, Windows had a fairly coarse system clock, at around 15ms precision. Now, however, the more typical precision is 1ms or 100us (micro-seconds). Similarly, Linux versions that are not ancient have a system clock set at either 4ms or 1ms precision, and you can configure that.

Furthermore, the standard high_resolution_clock is a clock that tries to exploit the finest possible precision that the system can offer, which is usually the CPU tick counts (which means nano-seconds on > 1 GHz CPUs), to provide high-resolution time values. These days, many computers and systems deliver nano-second "resolution" on that clock. This used to be unreliable, and still might be on some systems, because at that scale, the clock needs time to keep time, so to speak, and you also get issues of precision and multi-core synchronization. What happens is, the value given will be precise to the nano-second, but there can be some drift or readjustments. The intervals between those depend directly on your CPU's capabilities, but it's usually in the micro-seconds range.

For example, on my system (Linux), I get the following resolutions:

Resolution of real-time clock: 1 nanosec.
Resolution of coarse real-time clock: 4000000 nanosec.

The "coarse" …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

This is going too deep down a road I cannot follow. There is only so much I can do in terms of giving good advice on how to implement a terrible solution. When going down that road, at some point, there's a rupture, and I have to just say: Don't do that, don't try.

You seem obsessed with this, and I don't see any reason why you are doing this. Why would you want to use a MACRO (with all the problems that come with it) just to avoid a minor inconvenience (explicit template arguments)? The trade-off is terrible.

This whole thing just seems like a classic case of an XY problem.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

The FILE* way of reading from a file is just the legacy that was inherited from C. The C++ language was developed with the aim of largely keeping compatibility with C (i.e., C is nearly a subset of C++, with only minor incompatibilities). This means that the entire C standard library is included in the C++ standard library, and in C++, they are preceeded with a "c", as in <cstdlib>, <cmath>, <cstdio>, etc... This is why there are a number of things that are repetitive. By and large, you should just try to stick to "real C++" libraries, like the iostream library.

Another example of that is all the string functionality which you have for C-style strings (char*) in the C <cstring> library, but you should prefer using the C++ string library <string> and the std::string class.

At this point (especially with C++11 additions), the only C library really worth using is <cmath>, which contains all those simple math functions like sine / cosine. Almost everything else has a better and easier-to-use equivalent in the C++ libraries.

And by the way, the code you posted is C code, not C++, although it is also valid C++ code (as I said, by compatibility). The C++ equivalent of that code is:

#include <iostream>
#include <fstream>
#include <string>

int main()
{

    std::ifstream file_in("myfile.txt");
    if( ! file_in.is_open() ) 
        std::cerr << "Error opening file" << std::endl;
    else
    {
        std::string buf;
        while( std::getline(file_in, buf) )
            std::cout << buf << std::endl;
    }

    return 0;
}
furalise commented: Very helpful answer.. Thanks +0
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Write an example of object-oriented programming in C, including dynamic dispatching mechanisms and run-time type identification. This is something people do often in C, and always in a clumsy, ad-hoc way.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

The general approach to finding the "range" of elements of the same value is to just look for the lower-bound and upper-bound elements, as the standard functions std::lower_bound and std::upper_bound do. The lower-bound is the first element that is not less-than the value. The upper-bound is the first element for which the value is less-than that element. In other words, if the array does not contain the value you are looking for, then those two elements will be the same, i.e., have a distance of 0 between them. And if the value is in the array, then you will know how many there are.

Modifying the binary-search algorithm to get either one of these bounds is very straight-forward, as shown in the reference pages. Also notice that the "lower-bound" element is also the first equal element, if there is one, meaning that it can be used as the primary algorithm too.

Generally, this is the way you would do it. However, another simple solution is to find an element equal to the value you are looking for, and then, see how many adjacent elements have the same value.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

First of all, from your algorithm, it seems that the array is sorted from greatest value to lowest value, right? Otherwise your if(array[middle] > value) has the wrong < / > sign.

What seems bizarre about your binary-search algorithm is that it doesn't seem to allow for or detect the possibility that the number does not exist anywhere in the range. First, a good binary search algorithm should detect that (and return an error). Second, this is usually indicative of errors in the logic of the algorithm.

You have to understand that the iterations of a binary search algorithm always assume that (1) the sought value is within the bounds (first,last), and possibly, (2) that the bounds have been checked. And generally, if you enforce those assumptions, you will naturally detect error situations, and avoid other special problems.

For example, when you change your bounds, with first = middle + 1; and last = middle - 1;, how do you know that you are not actually increasing the interval, or that you are moving outside the array? If the sought value does not exist in the array, that can happen with this method.

Here is an alternative implementation:

// search (very standard)
int bSearch(double array[], int size, double value)
{
  int first = 0;
  int last = size - 1;

  // first, check the interval:
  if( ( array[first] < value ) || 
      ( array[last]  > value ) )
    return -1;  // "not found"

  // then, check the bounds:
  if( array[first] …
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Very interesting post. However, one thing that I noticed was that it doesn't seem like there is any incompatibility between the implementations of strtol and atoi. And so, I checked the GNU GCC's implementation of the standard C library, and here is what I found:

__extern_inline int
__NTH (atoi (const char *__nptr))
{
  return (int) strtol (__nptr, (char **) NULL, 10);
}

But, of course, that doesn't change anything of what you said, i.e., you should still just use strtol instead. But I would guess all implementations of atoi are just forwarding to strtol anyways. In other words, it's just a legacy name and specification for the same function, such that the old and unsafe code would still work.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I just replied. I initially thought that nullptr would come back around to answer.

why I do not get answer to my question???

That can vary. Sometimes the post is just too long. Sometimes you don't get the feeling that the original poster is really pulling his weight (not making real efforts, just wanting to be spoon-fed the answers / explanations). Sometimes, like in this case, you see that another member (that you trust) has already started to "take care of that thread". Sometimes, it's not interesting to answer. Sometimes the question is unclear. Sometimes the question is just off-putting somehow (bad code formatting, unclear questions, or bad attitude). Sometimes you explain and explain and it seems to lead nowhere, so you stop trying. There are many reasons why some threads go unanswered or people lose interest in answering them.

Remember, this is all just voluntary. Like in real life, if you want people to help you, you have to be nice, not too demanding, and just generally make it easy for them to want to help you, or keep helping you.

deceptikon commented: Well said. +0
Assembly Guy commented: Woo +0
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Why on earth do you want to do this?

I don't get it. All your MACRO achieves is making everything cumbersome and unclear.

As far as adding some, virtual functions, I don't see why you can't:

#include <iostream>
using std::cout;

#define EVENTS2(Y,X) \
    class X : public Y { public: \
    X(); \
    ~X(); \
    virtual void tests(); \
    }

class b
{
    public:
    virtual void tests() {}
    b()
    {
        tests();  // note: calls b::tests, not the derived override
    }
};


EVENTS2(b,a);

a::a()
{
    cout << "oi";
}

a::~a()
{
    cout << "bye";
}

void a::tests()
{
    cout << "test";
}

It's that simple.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I don't think that MACRO names can be wrapped in parentheses. The rules that the pre-processor works by are far simpler and cruder than the rules by which the compiler works. The pre-processor is just a simple parser, a find-and-replace type of parser. So, the MACRO definition should be something like this:

#define events2(y,x) ..

Then, by convention, MACROs should always be written in UPPER-CASE LETTERS. This is a universal convention in C, C++, and any other language with a similar pre-processor. Do not break that convention. If you do, you will only attract the ire of 99% of the programmers who ever set their eyes on your code. I know that, personally, if I see code that breaks that convention, I will dismiss it, mock it, and shun it with extreme prejudice. In other words, use this instead:

#define EVENTS2(Y,X) ..

Then, I assume you have read somewhere that MACROs are dangerous because they can take almost anything as a "parameter" that is simply substituted into the MACRO's expression (in a "find-and-replace" style). And also, that MACROs are dangerous because they could be put in any weird places. And that one of the guidelines related to these two problems is to be very generous with parentheses, to make sure that whatever is provided as a parameter to the MACRO is going to be inserted between parentheses and that the MACRO expression as a whole should be wrapped in a set of parentheses (because of the …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

The search algorithm is pretty trivial, that's why you always get back to the "Binary Search Tree". Once you have a binary search tree constructed, searching for a particular element or range is a simple matter of recursively going down the branch which should contain the sought-after value depending on whether it's less-than (go left) or not (go right).

The real trick is constructing that tree, and especially, maintaining it (i.e., dynamically inserting and removing elements from it). This is where different binary search tree methods differ, like mainly the AVL trees and the Red-Black trees. This is where you would add some bits of information and/or design special algorithms to keep the tree reasonably balanced (all leaf nodes at roughly the same depth) after an insertion or deletion. If the tree cannot be kept balanced, the performance is going to degrade, and also, using a recursive search algorithm would start to be dangerous (stack overflow issues).

And then there are other kinds of binary search trees that are more complex because they don't involve a single-dimensional quantity (e.g., like a set of numbers or words in alphabetical order). But all the same principles apply, it's just that everything is more complicated, including the search algorithm (but still pretty straight-forward, compared to the create / insert / remove algorithms).

And then there are different storage methods (e.g., linked-structures, compact layouts, cache-oblivious layouts, or hybrids). But that is a separate issue (memory locality).

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Each year your salary will be increased by some 8-10%

Yeah, right. I want to meet your employer. Yearly raises are more towards the 1-3% range.

after 30 years of life, what exactly we have in our hand ? Bank balance and experience of work in that company? Right ?

Hopefully, after 30 years of working, you will have accomplished some things that you are happy about or proud of. Whether you were on the assembly line of products that people enjoyed or needed, or whether you helped a lot of people, or whether you participated in some big and important projects. That's called taking pride in your work. So, that's one thing that you get, even as a "mere employee". And that's also why you should pick a profession that is fulfilling in that way, whether as employee or entrepreneur.

Also, a lot can change in 30 years. And at that point, your career or personal fulfillment ambitions might no longer be a priority. Priorities might be your children and/or leisure (i.e., retirement-style leisure). Running a business might actually be a burden at that point. I know my uncle seemed kind of relieved when his business went bankrupt in the recession, as he was close to retirement and his children were well off on their own. If it wasn't for the recession he might have had to keep running that business for another 20 years, out of duty to the employees.

So, things are more …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

"Who can solve this algorithm?"

Hmmm... you maybe? Usually, when you get an assignment to do, it is because it is something that you could benefit from doing yourself.

We don't provide ready-made answers to assignment questions here, because it helps no one to do so. And, we aren't tricked that easily either. You must show what you have done towards solving the problem and which specific problems you are stuck on.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I think it's just a matter of you having fairly low counts and that you've had quite a bit of activity in the past week or so (lots of posts, and several "post comments" with reputation points associated). This can make your rank jump quite a lot. It's all relative. At your ranking levels, a reputation increase of 15 points can make you jump nearly 100 rank positions. And 10 additional posts can make that rank position jump by about 50 positions. I think you just didn't notice it, because it jumped quickly.

I don't think that anything changed about the ranking system or points awarded.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

The problem comes from the fact that you delete the terrain pointer in the destructor, but that pointer points nowhere, i.e., you have not created an object for it to point to using the new operator. You can only call delete on pointers that point to memory allocated with new, which isn't the case here.

Also, your code is dangerous because you have not defined the copy-constructor and assignment operator. Read this tutorial.

And in modern C++, you should never really need to use delete, if you follow the guidelines in this tutorial.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Ahh... we were wondering what happened. Dani noticed that you deleted your account today, and she was wondering why. I'm sure she'll be able to revert it. But of course, no one knows how your account came to be deleted, whether by you or by someone accessing it without your consent.

<M/> commented: weird right? +0
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

why not a little [?] thing that you can click for a tooltip description

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

This looks suspicious:

1.5 million new lines of COBOL code are written every day
5 billion lines of new COBOL code are developed every year

If I know math correctly, I would say that 1.5 million times 365 days is only about 550 million, i.e., about 10 times less than the quoted 5 billion. Either way, one of those statements is BS, maybe both.

And also, adding this statement:

An estimated 2 million people are currently working in COBOL

Are we to believe that COBOL programmers average less than 1 line of code per day? I knew COBOL was verbose / tedious, but that's a bit crazy!

And I also don't know how easily you can compare COBOL lines of code with lines of code in other languages. Every example of COBOL that I have seen seems to suggest that it takes roughly 4 times more lines of code to write something in COBOL than you would need in a more modern mid-level language like C++ / Java / C#, and certainly far more than is needed in high-level languages like Python and the like.

And then, I'm always skeptical of statements like "COBOL applications manage the care of 60 million patients every day". These statements always seem to imply that the entire software stack for a particular domain is done in that language, which is obviously not true. Any large system involving multiple computers, their OSes, networks, database servers, communication systems, end-user applications, and …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

but in these case i belive that i can't use the 'const':(

That's right, you can't use a constant because std::endl is actually a free function which gets implicitly converted to a function pointer, and then, within the stream's << operator, it is called on the stream. Anyways, long story short, std::endl is not a constant variable, so, to rename it, the easiest is probably a #define. The other option would be to use a wrapper function:

std::ostream& NewLine(std::ostream& out) {
  return std::endl(out);
}

which will allow you to write things like:

std::cout << "Hello World!" << NewLine;

But, if you ask me, this is all just a waste of time, just use std::endl and forget about it. Renaming standard things is generally just a bad idea, every single C++ programmer in the world knows exactly what std::endl is, you don't need to find a "more obvious" name for it, it's already clear enough to everyone concerned.

what isn't right with that line?

That line seems correct; as far as compilation goes, the define should work for your purposes. The two problems with that line are (1) that it is useless, as I already explained, and (2) that it breaks the MACRO convention that they should always be entirely written in upper-case letters to make it clear to all that it is a MACRO and not something else. That's all that is wrong with it, beyond the fact that MACROs are not to …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I guess modern is relative, if you are used to using vi, vim or emacs for all your coding needs (as many many people do), then, in comparison, sublime-text looks like an ultra-futuristic spaceship (not Xenu's spaceship that "looks exactly like a DC-8").

I tend to prefer when applications stick to the appearance that is standard in the host system. If I want a dark theme, I can set a dark theme on the system, I don't like to have to set themes / color-schemes for each application individually. And also, I have key-combination shortcuts to switch between different themes (useful when light conditions vary). Anyways... enough ranting about that.

I've been using sublime-text for a little while now, on and off. I'm not convinced (yet). I guess I'm too conservative. And I tend to dislike tools that require me to learn all about them, it often feels like the tail wagging the dog, or doing years of training in Kung-Fu in order to break a cinder block, instead of just grabbing a jackhammer and moving on. I just want to write code, and have pleasant visual cues from the syntax highlighting, that's all. I like tools that learn about what I'm doing (what language, what code, etc.) and adapt to that, not tools that demand that I adapt to it, whenever possible. That's why I'll probably have trouble warming up to sublime-text. Although, I do understand its appeal.

And I'm also weirded out by their license structure. They seem …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

1 - why some C++ PRO don't advice me use these code?(making properties)

Properties, like they exist in Delphi and C#, are certainly nice and convenient, but not an essential part of any language. I mean, they don't enable any particularly interesting coding techniques or patterns, i.e., they just make the code that uses the property a bit nicer and more uniform. There are some encapsulation benefits to properties, but nothing ground-breaking either.

In C++, you can achieve a very basic implementation of properties, either mimicking Delphi-style properties (with a property class template) or C#-style properties (with nested classes), as seen here. But in all cases, the basic mechanism used to "look like a variable" is the combination of an assignment operator (for writing) and an implicit conversion operator (for reading). This mechanism does a decent job, and is used for many "value-wrappers", such as my "award-winning" lockable class template. But the result is definitely not perfect, and there are definitely compromises to be made with regards to how good things will look at the places where the properties are used. And since the whole point of properties is to achieve a perfectly uniform syntax where you use them, any compromise is unacceptable.

The bottom line is that if you can't achieve an implementation of properties that give you something more intuitive, clean and painless than a "standard" good old-fashion get/set pair (which everyone is used to and familiar with), then it's not an …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

As their name implies, they shift the bits to the left or right. In other words, the bits are moved to the left or right by the given amount of positions. If I take the number 13, I get this:

For left-shift:

13      ==    1101   ==  13
13 << 1 ==   11010   ==  26
13 << 2 ==  110100   ==  52
13 << 3 == 1101000   == 104

And for right-shift:

13      ==    1101   ==  13
13 >> 1 ==     110   ==   6
13 >> 2 ==      11   ==   3
13 >> 3 ==       1   ==   1
13 >> 4 ==       0   ==   0

I don't think there is a way to put it more clearly than that.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

is there a possibility to write such a common compiler?

Most compilers out there are written as a front-end and back-end duo. The front-end takes care of parsing the language, applying syntax rules and converting all the code into a simpler intermediate representation. Then the back-end takes that intermediate code and does all the final optimizations and the assembly into machine code that targets a given architecture (CPU). Most compiler suites share the same back-end for all languages, and simply provide a different front-end for each language. This is the case for GCC (GNU Compiler Collection), ICC (the Intel C++ Compiler), Clang / LLVM, etc... So, yes, it is possible to write such a "common compiler"; in fact, it is the only viable solution in the long run if you want to support multiple languages. But, of course, it makes no sense to have one front-end that can deal with all languages, you just have one separate front-end for each language, all using a common back-end.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I haven't used DVD-rippers in a while. But a few years back, before I temporarily moved to Europe, I had a bunch of DVDs, bought in Canada. I could have brought them all with me (as a disk stack), but even then, I could not watch them, since my only movie-watching-device would be my laptop, and DVD players (the hardware, I mean) are locked to one geographical zone (North America, Europe, Asia,...) with a very limited number of reconfigurations allowed (I could only change the zone 5 times in total). This would mean that any DVD I bought or rented in Europe could not be played on a DVD player configured for North America, and vice versa. This is obviously a market protectionist policy by the movie industry, to avoid competition between world markets. Anyways, this meant the only way for me to enjoy the products that I had legally procured for myself was to "rip" them to remove the encryption and any other DMCA crap. If some Hollywood law firm wants to argue that this is illegal (i.e., enjoying the things you paid money for), good luck to them, they'll need it (or, they'll need to buy politicians and judges, which is probably easier and much more common).

At that time, I just used the good old "DVD Shrink" software, works like a charm. I had to run it under "XP Compatibility", as it cannot run natively in Vista and later.

VLC can also rip DVDs.

For those …

LastMitch commented: Correct Answer +0
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

The "Myths of Islam" that AD referred to is the title of the webpage he gave a link to. And if you read that page, you'll see that the "myths" that the title refers to are the myths about Islam, i.e., the things people believe about Islam that aren't true, or people's erroneous preconceptions about Islam. So, the word "myth" is perfectly appropriate, and shouldn't offend anyone, as it is not calling any of the beliefs of muslims as being mythical.

Please don't refer to them as myths, it kind of is offending. No one should refer to a faith as a "myth", it kind of is disrespectful...

I call all faiths myths. If you are offended by that, it's your problem. Free speech means that some people will express opinions and beliefs that offend or irritate you, it's your responsibility to learn to deal with that (it's part of being a grown-up), it's not their responsibility to work their speech around your particular sensibilities.

And if you want to argue things related to Islam, I would suggest you either start a new thread or revive the very interesting discussion we had a while back.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

For the compiler to be able to inline a function, it must see its definition (implementation). This means you cannot do the classic "declaration in header" / "definition in cpp file" paradigm. The definition must be in the header file. Marking the function as inline or __forceinline tells the compiler that this is what you will do, thus making the functions candidates for inlining.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I'm afraid your question is too unclear. Please give an example of what you would like to achieve.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Classic intelligence failure

I guess that's a mild way to put it. Even if you presume there was no nefarious aspect to it, it is still grossly incompetent. Intelligence has two aspects: the information gathered and the level of confidence about it. Getting the information wrong is not a failure by itself, but if you estimate that you have a high level of confidence about that information, and the information turns out to be wrong, then you are grossly incompetent, and this is a monumental failure. The fact that the "intelligence" about WMDs in Iraq was presented as being of high enough confidence to warrant a preemptive war points to either a monumentally incompetent staff throughout the intelligence-gathering agencies, or a nefarious few individuals who misrepresented the facts to promote their agenda. With all the evidence, I have trouble dismissing the latter hypothesis.

As to Syria, guess where Saddam shipped a lot of his stuff when things started heating up in 2002?

I highly doubt that. First, most evidence points to Saddam not really having any significant amount of WMDs since the end of the first Gulf War. Second, if you think that Syria and Iraq are two countries that were likely to cooperate with each other around 2002, then you are grossly misinformed about the politics of the Middle East.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I thought * value is the number the pointer value points to

Yes, but in order for a pointer to point to something, you need to allocate some memory and set the pointer to point to that memory. Before you initialize a pointer, it just has some arbitrary value and thus, points to some random memory location, which is why you get a seg-fault when you try to read/write that memory (which is most likely not part of the memory dedicated to your program).

To allocate memory, you should do something like this:

value = new int;

after which the pointer value will point to one integer. After that point, you can write its value with a statement like *value = intVal;. So, each of your constructors need to make such an allocation with new before it proceeds to set the value.

For a more comprehensive tutorial on writing all those functions when holding a resource, like dynamically allocated memory, refer to here.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

You should also remove the using namespace std; from your header files. It is not acceptable to have using statements in header files. This is a rule to which there are no exceptions, period. You need to use std:: for each standard library component you use in your headers.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

So any conclusion for this? Did you just say impossible?

Nobody's talking about it being "impossible", everything is possible. What deceptikon said is that it currently does not work that way (not in C++, and not in any other language that I know of), and that making this work would be really hard (not impossible).

But one thing that needs to be clarified is that what you propose is, in general, a really bad idea, and will lead to far worse performance. So, this kind of feature would mean lots of pain for no gain (in fact, tremendous losses in performance and efficiency).

Here's why. Currently, if you have a simple C++ class Vector2D with two data members (x,y), each being of type double, then the memory layout is something like this:

****************
|  x   ||  y   |

where each * represents a byte (i.e., a double is 8 bytes). So, the total size of the class is 16 bytes. If you want to access the data member "x", you just look at the double value at the address of the object. If you want to access "y", you just look 8 bytes further in memory. This also means that if you have 100 objects of that class in an array, all the of data contained in all those objects are within a single chunk of 1600 bytes, which is easily loaded on cache memory and easily and efficiently traversed and operated on.

If you …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

They are both correct, although people would generally write them as O(n) in both cases, because the 2 in the answer to (b) is not meaningful.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

where they have all this awesome software and I always wonder what it is.

3D animation / modeling is usually done with a professional suite of 3D animation software, and often in combination with in-house software / add-on scripts. The most popular software, from my limited and somewhat out-dated knowledge, is Maya / 3D Studio Max. But if you want to try your hand at this, I would recommend going with the open-source Blender software, which many people use, even for "real" projects because it is very feature-rich and easy to extend with in-house features or scripts. Blender is probably as close as you can get to professional 3D modeling software without having to significantly lighten your wallet.

You can also take a look at the wiki list of 3D modeling software. But you should understand that this is a very broad category, ranging in features (basic to professional) and application area (e.g., engineering drawings, computer games, or Pixar-style animations).

Also, understand that a lot of computer game companies have a significant amount of expertise in 3D graphics and also have a lot of custom rendering code and custom modeling formats that are intimately tied to their game development. This often means that in-house tools / add-ons can be very easy to make and effective to use, and probably not the kind of software they would consider selling or sharing with the world (and their competition!). Blender was originally just …

aVar++ commented: Very helpful reply! +4
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

anyone, this is from second year of IT, anyone passed that?

Seriously? What did you do the first year? Make macaroni art?

From the example:

if (x == 0) then  // O(1) simple variable access
      for i = 1 to n do // O(n)
           a[i] = i; // O(1) simple variable access

You just multiply every nested thing, so, you get:

T(n) = O(1) * O(n) * O(1) = O(n)

That's it. Any constant factor disappears because it is unknown anyway, so answers like O(2*n) are impossible; it would just be O(n). And any terms that are smaller than the dominant term also disappear from the final answer (although they are sometimes kept as an intermediary result to show sub-complexities). For example, if you have this:

for i = 1 to n do  // O(n)
    a[i] = i;  // O(1)
    for j = 1 to n do  // O(n)
        b[j] += a[i];  // O(1)

Then, you get this:

T(n) = O(n) * (O(1) + O(n) * O(1))
     = O(n + n^2)
     = O(n^2)

because a quadratic term (O(n^2)) grows much bigger than a linear term (O(n)), and thus, "swallows" it when n is large.
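You can verify that count empirically. Here is a small sketch (my own helper, just mirroring the nested loops above) that counts how many times the inner O(1) body executes:

```cpp
// Count executions of the inner statement for the nested loops above.
// The count grows as n^2, matching T(n) = O(n^2).
long countInnerOps(int n) {
  long ops = 0;
  for (int i = 1; i <= n; ++i)    // outer loop: n iterations
    for (int j = 1; j <= n; ++j)  // inner loop: n iterations each time
      ++ops;                      // the O(1) body
  return ops;
}
```

Doubling n quadruples the count (e.g., n = 10 gives 100 operations, n = 20 gives 400), which is the signature of quadratic growth.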

Overall, this kind of stuff is very simple. For every loop, you just figure out how many times (on average) the loop will have to execute (e.g., in terms of the number of elements in an array, or some other quantity like that). And, for whatever is just a …

NathanOliver commented: Give them a link and I swear they just think you made the text blue. +11
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

multiply by 2 to the power of 15?

2^15 == 32768, I thought that was obvious.

All I did in that code was replace a line of code by an abstract function call; it was just to make you think about it at an abstract level, instead of just focusing on the details. You look at the initial piece of code and see a "division by the maximum value", and then you wonder how to do the same when the type is double instead of short. What I want you to see in that piece of code is a "normalization to a range of [0,1]", and then, ask yourself what would be a meaningful normalization for your problem domain. For the author of the code, in his problem domain, the meaningful normalization was a conversion from the range of a 16-bit signed sample, [-32768, 32767], to [-1.0, 1.0]. What is yours?

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

The problem with using the maximum value of a double is that you will most certainly lose all meaningful value in your samples array. For example, if you do samples[i] / std::numeric_limits<double>::max(), then it won't really matter much what the value of samples[i] is, unless it is extremely close to the maximum value. What you'll get is just a number extremely close to zero, within the number of significant digits. So, I don't think that such a strategy would make any sense. You need to rely on a meaningful upper-bound to your sample values.

When I see this code ABS(samples[i] / 32768);, to me, this looks like just a normal ADC (Analog-to-Digital Conversion) sample conversion to a floating-point value normalized to the 0-1 range. This is because ADC units take an analog signal and convert it to a digital signal of a certain number of bits N, and thus, to an integer value within the range [0, 2^N - 1]. To map that integer value back to the range of the analog signal (0 to 5 Volts, or just 0 to 1 in normalized units), you have to perform a very simple conversion, like samples[i] / 32768.
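As a sketch of that conversion (the function name is mine; this assumes 16-bit signed samples, as the divisor 32768 = 2^15 suggests):

```cpp
// Map a 16-bit signed sample onto roughly [-1.0, 1.0] by dividing
// by 2^15 = 32768, just like the samples[i] / 32768 in the original code.
double normalizeSample(short sample) {
  return sample / 32768.0;
}
```

For example, a sample of 16384 (half of full scale) normalizes to 0.5, and -32768 normalizes to exactly -1.0.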

So, you have to see that function as doing this:

public void StaticCompress(short[] samples, float param)
{
  for(int i = 0; i < samples.Length; i++)
  {
    int sign = get_sign_of(samples[i]);
    float norm = normalize_abs_value( samples[i] );
    norm = 1.0 - POW(1.0 - norm, param);
    samples[i] = denormalize_value( norm * sign );
  }
} …
DavidB commented: Thanks for the input Mike. Your posts are always thorough, going above and beyond simply answering the question +5
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

The post quality score is definitely the best indicator for how worthy and awesome a member is. Shamelessly coming from another 97% scorer ;) (I'm sure JorgeM agrees with me!)

... says the person who has a 97% post quality score ;)

... says the person who has a puny 90% score ;)

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

A heap corruption problem is almost always caused by calling delete on a pointer that does not point to memory that was allocated with new. And yes, the strcpy function does not allocate memory; it only copies data from one location to another. It is your job to allocate the memory and make sure there is enough of it to perform the entire copy. The function that actually does both an allocation and a copy is called strdup (for "string duplicate"), but be aware that this function uses malloc for the allocation, and thus the memory must be freed with free (also, strdup is not a standard C++ function; it is a POSIX function, available on most platforms, including Windows).

Another thing to be careful about is that C functions like strcpy are in the std namespace, like all other standard C++ library entities. Some implementations, including MSVC's, also put them in the global scope, but for maximum portability you should get into the habit of either always writing std::strcpy or adding a using statement (like using std::strcpy; or using namespace std; at the start of the function's body).
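Here is a minimal sketch of that allocate-then-copy pattern (the helper name is mine, not from your code):

```cpp
#include <cstring>

// Allocate enough memory first (length + 1 for the null terminator),
// then let std::strcpy do the copy. std::strcpy never allocates.
char* duplicateString(const char* src) {
  char* copy = new char[std::strlen(src) + 1];
  std::strcpy(copy, src);
  return copy;  // caller owns this: release it with delete[], never free()
}
```

Note the pairing: memory from new[] must be released with delete[], while memory from strdup (which uses malloc internally) must be released with free. Mixing them up is exactly the kind of mismatch that corrupts the heap.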

i was thinking of switching to std::string to get loose from all this char* crap and reimplement the class, but how would it be possible to use char* for this?, i am curious

This is a bit of a problematic question, because you are shutting down the answer as part of the question. The reality is, in …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

You must be very tired indeed. ;)

When you construct the object "w" of class Something, it will contain a data vector with a size of 100. When you construct a second object "f" of class Parser, it will contain a data vector with a size of 0, because that's the size of an empty vector. The constructor of the class Parser does not initialize the size of the data vector within the object under construction to anything different than the 0 that it starts out with.

You almost seem to be confused about the difference between an object and a class. Each separate object contains its own set of data members (as declared in the class declaration). These data members are not shared between different objects, regardless of inheritance relationship, or even if they are of the same class. The data members that are tied to a class (instead of an object) are called "static data members", and must be declared with the static qualifier.
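A small sketch of that distinction (illustrative class, not from your code):

```cpp
// Each object owns its own copy of a non-static data member,
// while a static data member is shared by the whole class.
struct Counter {
  int ownValue = 0;        // one copy per object
  static int sharedValue;  // one copy per class
};
int Counter::sharedValue = 0;  // static members are defined once, outside

// Returns true if the members behave as described above.
bool membersBehaveAsDescribed() {
  Counter a, b;
  a.ownValue = 5;                         // does not touch b's copy
  bool separate = (b.ownValue == 0);
  a.sharedValue = 7;                      // visible through every object
  bool shared = (b.sharedValue == 7);
  return separate && shared;
}
```

In your case, the data vector is a non-static member, so "w" and "f" each carry their own, independently constructed vector.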

kal_crazy commented: very good explanation :) +0
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Probably not the best implementation, but, hey! If anyone has any more, feel free to post :)

Definitely not, especially considering that it has a blatant memory leak.

Overall, I have a really hard time understanding what you actually want to accomplish. Even the title is confusing: "CRTP without template arguments", in other words, "Curiously Recurring Template Pattern without templates", is what normal people would just call "inheritance".

If what you want to do is somehow return a reference to an instance of Parser class via the Signal::Parse() function, and then copy that into a Signal object, then all you are doing is slicing an object. It doesn't "feel" right, to say the least.

I've read your posts over and over and I really can't understand what you want the behavior to be. It is hard to understand behavior when all the functions are empty and thus, whatever happens, the effect is naught. Please provide a more concrete example, with data and observable behavior.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

You can always count on good ol' jwenting to come out of the woodwork and intervene in any rational discussion, and take it all the way to crazy town.

IMO there's no such thing as "unacceptably rich". The whole idea that people should be kept poor

There is a middle ground between "unacceptably rich" and poor, you know? For example, you could double the income of nearly half of Americans if you seized the income of the 400 richest individuals and redistributed it. This would literally eliminate poverty in the US. It's not about keeping anyone poor; in fact, it's quite literally the opposite.

Potential investors are taxed to the point they have no money to invest

1) You are probably taxed at a much higher percentage than any potential investor.
2) Investors are only taxed on capital gains or interest (and at a very low rate), meaning there is no adverse tax consequence from investing the principal. And the alternative (not investing) is much worse, even as far as taxation goes.
3) "Never did anyone mention taxes as a reason to forgo an investment opportunity that I offered," -- Warren Buffett, while dismissing this argument, which he qualifies as a crazy fantasy.

The Gold standard for currency worked well, until it was abandoned by countries needing a quick cash infusion for their governments that wanted to go on a spending spree for which their gold supplies lacked the backing volume.

Not at all. The …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

1) there's no evidence chemical weapons were used in combat, let alone by government forces, let alone under orders from the president.
2) there's actually strong indication that the victims are the victims of the rebels (al qaeda that is...)

I should have made it clear initially that my statement was under the assumption of "if we get definitive evidence that Assad did it". As I stated in a later post:

"As far I know, there hasn't been any proof at all. The media doesn't really seem to care either way. From what I have gathered, it seems the only definitive fact is that a lot of people died suddenly. Cause unknown. The rest is speculation."

3) even if the Syrian government were responsible, Syria is no signatory of any treaty barring the use of chemical weapons, therefore can't be held responsible for violating such a treaty.

Yes they are. They signed the Geneva Protocol on December 17th, 1968, prohibiting the use of "Asphyxiating, Poisonous or other Gases, and of Bacteriological Methods of Warfare".

4) it's an internal affair in an independent nation. Why are we so upset about this when there's a rumour that someone used weapons against civilians that quickly killed a few hundred but when there was a massacre in Rwanda where millions were slaughtered with machetes and other crude weapons the world stood idly by and let it happen.

Technically, violating an international law is an affair of international law, …

<M/> commented: How do you write so much!! +0
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

You want to know the reasons behind currencies going up or down in general?

To answer that, you have to understand what money is. Fiat currencies (which are most currencies today) are records of debts that the treasury (government or a major bank) has incurred with the central bank (the bank that issues the currency). In other words, it is what we commonly call an "I-O-U", meaning that a 10-dollar bill means that someone borrowed 10 dollars from the central bank at some point. Meaning, someone has a debt to repay, and can only repay that debt in the same currency it was incurred (i.e., if I borrow something from someone, I must give back that same thing, not something else). This means someone "wants" that currency, and if someone wants it, someone is willing to trade something for it (work, goods, etc.), and that is what gives the currency its value (i.e., just basic supply and demand). And the entity most interested in a currency is the state(s) to which the currency is tied, because they are the biggest original borrower(s) of the currency.

Once you understand this, you can easily figure out how and why the value of currencies goes up or down.

As with any record of debt, its value is measured by the trustworthiness of the borrower(s). If a borrower defaults on his loan (i.e., goes bankrupt), he is no longer looking to repay the debt, which lowers the demand for the currency and thus, its value. …

GrimJack commented: Very well said! +0
<M/> commented: How do you type the much in every question! +0
nitin1 commented: same question as from <m/>, do you have any software to do that ;) well said!! +0
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Microsoft Visual Studio - Don't try to run it in Linux; there is no point in doing so. If you want to program in Linux, you need to use tools that are native to Linux; there is no way around that. Visual Studio is for Windows development only, and there is no point in using it for any other purpose or on any other platform. Popular IDEs in Linux include KDevelop, CodeBlocks, Eclipse, Geany, Sublime Text, NetBeans, etc.

Adobe Products - This is the most typical thing people try to run in Linux because Adobe does have some very good products (Dreamweaver, Photoshop, etc.) and the competition from the open-source world is not quite up to it, yet. Generally, I think Adobe software can be run in Wine without too much trouble, but it could lead to a few quirks or glitches when using it. But you might want to explore the alternatives first. Here is a list of open-source alternatives to Dreamweaver, many of which will be sufficient for most ordinary tasks. As for Photoshop, the alternative is GIMP, which is a long-standing open-source image editor / manipulator project and is now a very rich piece of software (with probably far more features than Photoshop), and the only criticism about it is that it isn't as user-friendly as Photoshop, but still, give it a try. Most of these alternatives can be found directly from your "Software Center" (or apt-get commands).

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

How many use it on Linux

I've never used it. I keep Windows around on a dual-boot. And the only things that I cannot get in Linux are things I wouldn't try to run in Wine, or that aren't worth the effort. Mostly, I use Windows for either computer games (for which I don't mind rebooting) or advanced engineering software that I definitely wouldn't even try to run under Wine. I have not found anything that I so desperately needed and didn't have a good-enough version or equivalent in Linux.

So, that's my two cents.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Qt is definitely the best, IMHO. It is not true that you need another IDE for using Qt. But it is true that it is easier with certain IDEs that have a tighter integration with Qt. This will be true for any GUI library out there.

As far as I know, the best IDEs for working with Qt are Qt Creator, Visual Studio, and KDevelop. Qt Creator is the IDE provided by Qt, and it is a pretty good IDE. Visual Studio requires that you install a Qt plugin for it, and I think it also works for the free versions of Visual Studio (the "Express" versions). KDevelop was developed in Qt and largely for Qt development (for the Linux KDE environment), so, it is well integrated with Qt out-of-the-box.

Otherwise, it is possible to use any other IDE and do Qt stuff. However, you will not be able to use many of the traditional things you use an IDE for, because you will have to set up the build script in qmake, build your projects through the command-line (or by setting up the custom commands in DevC++), and so on. At this point, the IDE will be little more than a text editor. This is pretty much true of any other GUI library, because they are generally too big and complex and require a number of extra build-scripts and things like that.

Another option that is similar to Qt is wxWidgets. And the other options are mostly tied to …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

The paragraph is very poorly written. I understand what it is referring to, but the wording of the text is horrible and imprecise.

What it is referring to is the fact that template instantiations have internal linkage. This is to guarantee link-time satisfaction of the ODR (One-Definition Rule).

Normally, if you have a function declared in a header and defined (implemented) in a cpp file, then you must compile the cpp file once (as an object file) and link it to the rest of the object files to create your final executable (or dynamic library). If you were to link that object file twice, or have multiple definitions of the function in different source files, the linker would give you an error stating that you have multiple definitions for a given symbol (function-name).

With templates (function or class), the definitions usually appear in the header files, and the template is instantiated for each context in which it is used (i.e., as "needed"). This means that the same template instantiation could appear in multiple object files (from different source files that happen to contain the same template instantiations), which would seem to contradict the one-definition rule. To solve this issue, the compilers treat template instantiations as having internal linkage, meaning they are not "exported" from the object file, and thus, do not clash between each other when you link them together.
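To make that concrete, here is a sketch: imagine the template below lives in a header included by two different cpp files (simulated in one file here), so each translation unit would instantiate square&lt;int&gt; on its own, and the linker would tolerate the duplicate copies instead of reporting a multiple-definition error:

```cpp
// Imagine this definition sits in square.h, included by a.cpp and b.cpp.
// Both object files would then carry their own copy of square<int>;
// unlike a plain function defined in a header, this does not trip
// the linker's multiple-definition check.
template <typename T>
T square(T x) { return x * x; }  // definition visible at every use site

// Each distinct use creates (or reuses) an instantiation:
int    squaredInt    = square(3);    // instantiates square<int>
double squaredDouble = square(2.5);  // instantiates square<double>
```

A plain (non-template, non-inline) function defined in that same header would instead produce exactly the "multiple definitions" linker error described above.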

Note that many compilers do not exactly obey that rule, as often they do export the template instantiation symbols, but …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Why do you want to install a windows program onto Linux? Chances are, there is a Linux version of the software or an equivalent application for Linux. What is the software itself? Maybe we can suggest a Linux equivalent.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

One learns to play a piano well only by years of practice -- same with progamming languages.

Good analogy. I'd add to it: One doesn't learn to play a piano well by only playing the same tune over and over again.

You have to find new challenges all the time to progress.

ddanbe commented: Well said! +14