mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Reverend Jim thought that you were talking about 4K as in 4 kilobytes (4KB), and by that measure, the size of a 4KB file is, well, 4 * 1024 bytes. He simply did not realize you were talking about the new ultra-high-definition format for video and televisions commonly called "4K", which refers to its roughly 4,000-pixel horizontal resolution, about four times the pixel count of 1080p video.

Since 4K has four times as many pixels as 1080p, a rough approximation of the file sizes would be four times that of an equivalent 1080p video file. But, of course, most video compression methods aim to achieve as much compression as possible without noticeably affecting the quality of the picture, e.g., they merge parts of frames that remain static across several frames (like a fixed background), and they compress groups of pixels that have nearly identical color values (effectively making bigger pixels). So, when the resolution is as high as 4K, there is probably a lot more opportunity for compression too, i.e., when the resolution exceeds what you can perceive with your naked eyes, the video can be compressed a lot before you notice any difference. So, I would say that it might not be as much as 4 times the size of 1080p video, maybe as low as 2 times bigger, or even less, depending on the encoding quality settings.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

you mean their internal representation is in adjacent format.therefore no matter what character coding we are using because every character encoding sets alphabets in a sequence. correct ?

Correct. I'm not sure if it is absolutely guaranteed, but all encodings that I have ever heard of have this property (letters in alphabetical order and digits in order 0-9). Someone would have to be pretty insane to come up with or use an encoding that does not have this property, because many functions rely on it, like string-to-number conversions (and vice versa), and things like turning all characters into upper-case or lower-case. So, if an encoding existed that did not have this property, most of that code would be broken (not work properly).
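
For instance, here is the kind of code that relies on this ordering property (a minimal sketch of my own, assuming an ASCII-compatible encoding):

#include <iostream>

int main() {
    // Digit-to-number conversion relies on '0'..'9' being contiguous
    // (this much is actually guaranteed by the C++ standard):
    char digit = '7';
    int value = digit - '0';   // gives 7

    // Lower-casing relies on 'A'..'Z' and 'a'..'z' each being contiguous,
    // which holds in ASCII and its descendants:
    char upper = 'G';
    char lower = static_cast<char>(upper - 'A' + 'a');  // gives 'g'

    std::cout << value << " " << lower << std::endl;
}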

How to read last 3 characters in a line(i want to read digits recursively in a line) ?.I have no clue about how to set the file cursor to the end of the line and then start reading.

For that kind of task, it's preferable to just read the entire line into a string and then deal with the characters in the string. C++ strings (the std::string class) are just arrays of characters with many more features for string manipulation, like extracting a sub-string (part of the string), finding specific characters, etc... You can read an entire line with the std::getline function (see ref), like this:

std::string line;
std::getline(readob, line);
std::string last_three = line.substr( line.size() - 3 );
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

doesn't if(--count) is equal to if(--count == 0)

No, they are exact opposites. If you have any integer or pointer variable, call it a, then writing if( a ) is equivalent to writing if( a != 0 ), because any non-zero (or non-null) value converts to true; so, if(a) is true as long as a is not equal to zero.
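
A quick demonstration (my own illustration):

#include <iostream>

int main() {
    int count = 2;
    if(--count)         // --count yields 1, which is non-zero, i.e., "true"
        std::cout << "taken when the decremented count is non-zero" << std::endl;
    if(--count == 0)    // --count now yields 0, so the comparison is "true"
        std::cout << "taken when the count has reached zero" << std::endl;
}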

I thought the author just want me to implement it just to test my basic template building skill then apply it to a designated container.

Or maybe the author just wanted you to discover the issues that you are discovering right now. If that's the case, then that author is smart, because the best way to learn is not to be told step-by-step how to do things, but to discover the issues and search for ways to solve them on your own.

Yet, you just make me realized that I still have bad habit and prediction skill, such as 3, 4, 5, and 7.

That's alright. Bad habits die hard, but the earlier you become aware of them, the better.

1,2 6, and 8 is intentional(too lazy, use anything as long as it is "working").

Well, 1 and 2 are not really optional. Not having those constructors / operators is a bug, and of the worst kind. The worst kind of bug is when it still is "working", or so it seems, but it is actually doing the wrong thing, and silently …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

don't work also, I end up defined it inside of the class itself.

Yeah, that's why I wasn't sure (without checking), because I always just define it within the class.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

You cannot do a forward declaration of a class outside of its namespace; you need to do this:

namespace std {
template <class> struct hash;
}

I think that will solve it.

Also, I'm pretty sure you need the template <> on the definition of the hash's operator() for your full specialization.
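
For example, the whole thing would look like this (a sketch using a hypothetical class my_key, not your actual code):

#include <cstddef>
#include <functional>
#include <string>

class my_key {   // hypothetical class, for illustration only
  public:
    std::string name;
};

namespace std {
template <>
struct hash<my_key> {
    std::size_t operator()(const my_key& k) const {
        return std::hash<std::string>()(k.name);  // delegate to an existing hash
    }
};
}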

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Ok, about the error, it's very simple, I should have caught it earlier... You need to use new to allocate an object to pass to the shared-ptr:

return shared_ptr<T>(new T(std::forward<Args>(args)...));

in your make-shared function.

But looking at your shared-ptr class, there are obvious problems:

template <typename T> 
class shared_ptr {
  public:
    shared_ptr() = default;
    shared_ptr(T* point): p(point) { ++count; }
    shared_ptr(T* point, std::function<void(T*)> rem):
        p(point), del(rem) { ++count; }
    shared_ptr(const shared_ptr<T>& val):
        p(val.p), count(val.count), del(val.del) { ++count; }

    T& operator*() const { return *p; }
    T* operator->() const { return & this->operator*(); }
    std::size_t use_count() { return count; }

    void deleting()
        { del ? del(p) : delete p; }
    ~shared_ptr() { if(count) deleting(); else --count;}
  private:
    T *p = nullptr;
    std::size_t count = 0;
    std::function<void(T*)> del;
};

Here are a few problems that immediately pop out:

  1. No copy-assignment operator.
  2. No move-constructor and move-assignment operators.
  3. Single-parameter constructor not marked with explicit.
  4. The test if(count) should be if(count == 1) (or, use if(--count == 0)).
  5. The reference count needs to be a shared state between all shared-pointers that point to the same object. Just consider this situation ("sp" for shared_ptr<T>): sp p1(new T()); sp p2 = p1; sp p3 = p1;, which will result in p1 having a count of 1, p2 and p3 both having a count of 2, and the object will be destroyed as soon as p1 is destroyed, regardless of the situation, i.e., there is, in effect, no reference counting.
  6. You have a lot …
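
Regarding point 5, here is a minimal sketch (my own, heavily simplified and not thread-safe) of how the count can become a shared state, by allocating it alongside the object:

#include <cstddef>

template <typename T>
class shared_ptr_sketch {
  public:
    explicit shared_ptr_sketch(T* point) :
        p(point), count(new std::size_t(1)) { }
    shared_ptr_sketch(const shared_ptr_sketch& val) :
        p(val.p), count(val.count) { ++(*count); }  // copies share one counter
    ~shared_ptr_sketch() {
        if(--(*count) == 0) {  // the last owner cleans up
            delete p;
            delete count;
        }
    }
    // copy-assignment, move operations, custom deleters, etc., omitted
    // for brevity (see points 1 and 2 above).
  private:
    T* p;
    std::size_t* count;  // shared by all copies pointing to the same object
};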
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

We need more detail. You say you are using a custom version of shared-ptr, and the error seems to be in that custom class. So, we need to see that code, otherwise there is really no way to tell where the error might come from.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

This is indeed an interesting discussion.

There's a lot of talk about "speed" here, but that is rather vague and inaccurate. What makes RAM "fast" is its low CAS latency. It is not so much a matter of bandwidth (bytes per second) as of response time or latency, i.e., the time it takes to obtain a read-out of memory at a particular random address, from the request to the reply. In a computer, different interfaces (i.e., connectors) are designed for different purposes, with very different requirements. The memory controller and the system bus are designed for very low latency (i.e., very responsive) exchanges of raw memory between cache and RAM. Things like PCI buses are designed for extensible and convenient communication between peripheral devices, which is not very efficient for data transfer; it's more for commands (issued by drivers) to operate the hardware. And SATA and other HDD interfaces are tailored for bulk high-bandwidth data transfers, which also come with a lot of latency (e.g., software, firmware and hardware caches, buffers, etc..); in other words, you can transfer a lot of data very fast, but transferring a very small chunk of data takes a very long time (relatively speaking). USB connections are somewhat in between, as they were originally just an upgrade of serial ports (or TTL), which were mostly for sending commands to devices, not for data throughput, but they have since been bumped up to higher throughput (with versions 2.0 and 3.0) to accommodate …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Given that the biggest vulnerability in any network is the human beings using it (e.g., not protecting physical access to key machines, leaving for a break with the computer still logged in, using simple passwords or none at all, visiting dubious websites while at work, etc..), I would fear that any autonomous network defense software, if very clever, would deduce or learn that the best way to protect itself is to not let any human being near it, rendering the whole network useless to the people who are meant to use it. Basically, like HAL, i.e., shut the humans out to minimize the threat to the "mission".

1) Neuter your system. Eliminate the noisy pieces, simply assumptions, and design for a mathematically precise environment.

2) Expect you will have faults. Design for the elements that occur every day in practice.

I would say there are many more options than that. The options you mention are what I would consider parametric approaches, in the sense that they try to model (or parametrize) all the possible faults and then either eliminate them from the analysis / experiments (1) or design to avoid or mitigate them (2). Typically, in research, you start at (1) and incrementally work your way to (2), at which point you call it "development" work (the D in R&D). But approaches that have had far more success in the past are the parameter-less approaches. The idea there is that you don't try to understand every possible failure, you just …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I agree that ads are annoying every way they appear (eating up the whole show or eating up the bottom half of the screen). I think this paradigm will have to shift soon because everyone now has a DVR and barely watches anything live anymore; and even if you want to watch a show live, you start recording it, spend the first 15 minutes watching a sitcom episode from your bank of recorded shows, and then start watching the pseudo-live show, skipping all the commercials. The point being that even though a show can get great ratings, no one watches the commercials. And then, you have to add to that all the on-demand services (Netflix, etc.).

I also agree with the hate for the repetitive nature of all these reality shows, where they spend more than half of the show's running time just previewing what will come up next in the same show. By the time you reach the climactic end scene, you've seen about 5 previews of it. They just stretch things out by previewing the same stuff again and again. The worst is when they preview the same "season climax" event again and again, several episodes before the one in which it actually happens, always hinting that it might happen in the current episode.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I got 16/21, which is not too bad. In the five I missed, two were expressions I had never heard before and involved words I didn't even know (e.g., pang), then one of them I just went for the grammatically correct option while the correct expression was grammatically incorrect (as expressions often are), and the other two mistakes were toss-ups and I learned something (e.g., I never thought an Irish folk dance made any sense in that expression!).

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

it should print 10(1+2+3+0+1+2+0+1+0)

So, you want to add together the first three digits of each row. Right? I say that because NathanOliver's code assumes that you want to interpret the first three digits as one number, and add the three (for each row) together, i.e., it should print 145 (123+12+10).

Your code:

for(i=0;i<3;i++)
{
    readob>>a[i];
    total+=(int)a[i];
}

is not correct, but almost. First of all, you don't need a cast to int, because char is already an integral type. Second, you don't actually need an array of chars, because you only use it temporarily inside the loop. And last but not least, the character (digit) that you read from the stream is a string character, i.e., a byte value that translates into a printable digit (see the ASCII table for one example of an encoding). In other words, a char that is printed as "2" will actually have a numerical value (when treated as an integer) of 50 (if ASCII is used, but it could be something else). So, your addition of 1+2+3 is not going to give 6, but probably 150 (49+50+51). To convert a char digit into an integer number, you can just do c - '0', because the digits are always (I think) in order in the encodings (ASCII, UTF-8, etc.). So, with those things in mind, your code should be:

for(int i = 0; i < 3; ++i)
{
    char c;
    readob >> c;
    total += c - '0';
}
Learner010 commented: very helpful +4
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Generally, before being a great reverse engineer, you have to be a great engineer. So, I would advise that you start by trying to become a good programmer before you even consider going down this road, which is the wrong road, btw. For any experienced programmer, there is no real mystery about how these software cracks are made; it's pretty straightforward, as in Hiroshe's example. Making keygens is more a matter of how naive the verification function is.

There are some tools that people use, but they are probably not what you expect, i.e., they are not "automatic" cracking software. They are tools like disassemblers and in-memory bytecode inspectors. Either way, you end up dealing with machine code or bytecode (which is almost like machine code), which you then have to comb through to find an opportunity to circumvent the security.

This isn't rocket science, just a lot of patience and bad intentions.

And to that point. The rules of this forum do not permit the discussion or promotion of illegal activities. We cannot condone such activities and I don't expect anyone will (or should) give you any precise instructions on how to crack software. I think that if anyone would go too far beyond the kind of vague explanations I just gave, I, as a moderator, might have to delete that post (and possibly issue an infraction against the rules of this forum site).

I am bit confused where this question should be placed.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

@sami9356: We don't appreciate it much when people simply copy-paste their assignment questions to this forum. You are unlikely to simply get an answer. The answer to your question can be found within explanations of how to correctly and efficiently implement an assignment operator; here are a few tutorials on that: here, here and here. The two problems to check for are mentioned in all three articles; if you read them, you will find your answer.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I wasn't born yesterday, and I recognize that little game you're playing by constantly diverting and deflecting, trying to trigger more responses and frustrations. I believe the colloquial name for that is "trolling". The answers to your questions and concerns have been pretty clear and comprehensive at this point. I see no reason to continue elaborating, and unless you provide more substantive explanations of your (mis)understandings, I would advise others not to waste any more time on this guessing game either.

If you are truly genuine about your misunderstanding of this situation, you are doing a poor job at communicating that. I advise you to provide a clear and comprehensive explanation of your concern or question. Nobody wants to keep guessing what you mean or want to know, without you clearly expressing it.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Here is where the logic is wrong:

"since the default is only and only executed when no case matches"

That's not true. If no other case matches, then the switch will jump to the default case. It's an "if-then" rule, not an "if-and-only-if" rule. The fact that no other case matches is sufficient to get the default case executed (jumped to), but it is not necessary. In other words, it's a one-way rule:

  • "If no case matched, then execute default": true
  • "If execute default, then no case matched": false

So, with that faulty logic statement removed, your problems vanish.

So, to your question, "Why???", well, once you understand the rule correctly, you can see that the compiler does exactly what it is required to do.
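
For example (my own illustration, assuming the usual culprit, a missing break), a matching case simply falls through into the default case below it:

#include <iostream>

int main() {
    int x = 1;
    switch(x) {
        case 1:
            std::cout << "case 1 matched" << std::endl;
            // no 'break' here: execution falls through...
        default:
            std::cout << "default runs even though case 1 matched" << std::endl;
    }
}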

By the way, this behavior does not come from C++, but rather from C (or one of its earlier predecessors). I would expect that all programming languages derived from C (which is essentially all mainstream programming languages used today) have the exact same behavior for switch statements. So, that's another answer to the infamous "Why???" question: it has just traditionally always been so, since the dawn of programming (i.e., since Ritchie and Thompson created C and its predecessors). AFAIK, only languages derived from Pascal (which are nearly extinct now) have the behavior that you are describing.

Furthermore, there is a technical reason "Why???" the switch statement has this behavior. Basically, a switch statement is a series of GOTO statements with case-labels. GOTOs and labels are the most …

TrustyTony commented: Right! +12
ddanbe commented: Deep knowledge! +15
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

The mechanism for calling virtual functions is, technically speaking, unspecified or implementation-defined, meaning that there is no actual guarantee about how it is done. However, it is almost always accomplished the same way: via a virtual table. So, for each class that defines or inherits virtual functions, the compiler will generate a table (array) of function pointers, each pointing to the appropriate function.

The table is laid out such that each specific function has its specific place (index) in that table. And the placement of the functions is essentially hierarchical for all the inheriting classes. So, let's say you have classes A, B, and C, where C derives from B, which derives from A. Say that A has a virtual destructor (as all base classes should) and a virtual function called "foo". Then, B has a virtual function called "bar" and C has one called "foobar". Then, the virtual table for class A will have (destructor, "foo") in it, then the table for class B will have (destructor, "foo", "bar") in it (in that order), and finally, C will have (destructor, "foo", "bar", "foobar"). Because of this hierarchy, the virtual table of C "looks like" a virtual table for class A, if you only look at the first two entries. That's how that works.
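
In code, the hierarchy just described looks like this (a minimal sketch):

class A {
  public:
    virtual ~A() { }          // vtable of A: (destructor, foo)
    virtual void foo() { }
};

class B : public A {
  public:
    virtual void bar() { }    // vtable of B: (destructor, foo, bar)
};

class C : public B {
  public:
    virtual void foobar() { } // vtable of C: (destructor, foo, bar, foobar)
};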

Finally, whenever you create an object, there is a pointer within that object (which you cannot see, because the compiler generates it for you), and that pointer points to the virtual table of the most-derived class …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

The first option is certainly good. I mean, all three options are good. But I think that if you consider going for the first option, you should seriously consider going for the 3rd option, because getting 2GB more RAM, 500GB more HDD, and a graphics card about twice as good, all for an additional 4.5kINR, is a real bargain, in my opinion. Any one of those upgrades by itself is worth about the extra 4kINR; all three is a bargain.

Thirdly, is there any option to switch off graphic card?

Maybe. I think this depends on the chipset (motherboard) and the graphics card itself, and you might have driver issues (especially in Linux, you might not be able to turn it off). But I'm not very knowledgeable on these things.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I think it's perfect. Pretty much. All it's missing is fitting in a hammer and sickle somewhere on the UI. ;)

diafol commented: heh +0
blackmiau commented: I call dibs on the sickle! +0
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

when the question asker donates money to DaniWeb, we put that money into a pot, and it eventually gets handed out to people who ultimately provide the help

Wow, that sounds exactly like Socialism. Cool. Thanks Comrade Dani. ;)

It's a complex algorithm and so many factors go into it.

Yeah, I'm sure it is.... (rolleyes).. I'm sure you just made up a simple little equation that combines a few metrics together.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

The second option seems pretty good. I think that you will be perfectly happy with integrated graphics, like the Intel HD Graphics that come with their boards. The only thing that is a little weak with option 2 is the amount of RAM; I think that 4GB is a bit low (remember, with integrated graphics, the graphics work off of the main RAM, so 4GB really means that about 3 to 3.5 GB will be used for main tasks and 0.5 to 1 GB for graphics). If that option can be customized to include more RAM (like 6 or 8 GB) and still be within your price range, then that could be preferable. An additional 4GB of RAM should cost in the range of a few thousand INR.

I also do not have dedicated graphics on my laptop, and I'm perfectly happy with it. I think a dedicated graphics card is really only useful for serious gaming. The integrated graphics from Intel are very good and fast and will work very well for most purposes. It's only if you want to play the most recent games at the highest graphics quality settings that you will hit performance problems; they will work fine on mid-level settings, and for everything else (not games) you will not notice any difference at all. Integrated graphics also have the advantage of being better integrated with the rest of the hardware, having fewer driver issues, and making the laptop smaller …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Beef Jerky!

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Would that qualify as DSEL

No, it just looks like normal C++ code. Right now you have this expression:

a.set("3-(5-(7+1))^2*(-5)+13");
a.rpn();

But if instead it looked like this:

3-(5-(7+1))^2*(-5)+13;

or nearly so, and it was compilable by a C++ compiler and produced the exact same output as your original version, then it would be a DSEL. In this case, it wouldn't really be possible, given that the expression uses primitive types, which can't have overloaded operators.

would I have to use operator overloading

I would say that a DSEL does not absolutely require using operator overloading, but it very often does require it. It always depends on the domain, and what kind of pseudo-language you want to create. Operator overloading is just a very powerful way to create a new language within the host language.

The real litmus test for DSELs is "does it look like code of the host language?". If the answer is no, then it is a DSEL. For example, here is a Boost.Spirit example (parser for a comma-separated list of floating-point values):

double_ >> *(char_(',') >> double_)

Does that look like C++? No. It's a DSEL, i.e., a completely different language (still compilable C++ code, if you include the right headers).

Since java doesn't support operator overloading does that not disqualify it from being able to create a DSEL?

I was just listing general-purpose languages. But yeah, Java can't support DSELs, because it just doesn't have the …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Most game engines that commercial games use are proprietary, meaning that they were developed by a game company who now owns it and licenses it to other companies (or just uses it for its own in-house games). If company A wants to use a game engine developed by company B, then they would contact them, and strike a licensing deal which would include (I'm guessing) an initial payment to get the engine and be able to work on it, maybe some recurring payment for technical support during development (so that people at company A can get help from people at company B), and then, some royalty on the final product (e.g., 5% of sales of the game goes to the game engine company). Of course, when a game company develops its own game engine, then it doesn't have to worry about any of this.

That said, there are some open-source game engines out there, and also some that are dual-licensed, meaning that you can download the engine and develop games with it for free, but if you want to sell a game you developed with it, you must negotiate a commercial licensing deal (e.g., a royalty on your sales). One nice and popular open-source game engine (well, only the main part, which is graphics rendering) is Ogre3D. Another popular option is Unity, which is a kind of dual-licensed engine (but it's more closed-source, and more of a shareware thing). Gamedev is a great resource for this too, …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

DSEL: Domain-Specific Embedded Language. It's when you create an informal domain-specific language (DSL) inside of a general-purpose language (e.g., Java, C++, D, etc..). This generally involves overloading operators and playing special tricks with the semantics of the objects and classes, which, together, completely redefine the normal semantics of the code and turns it into something more appropriate for the targeted domain, i.e., a domain-specific embedded language.

In the C++ standard library, a small example of that is the IO-stream library, where the right and left shift operators take on the semantics of input from and output to a stream, and free functions like std::endl, std::flush, or std::setw now take on the semantics of instructions pushed onto streams, as opposed to things that you call (in the imperative sense).
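
For instance, this single line of ordinary iostream code reads as a little output "language" of its own:

#include <iomanip>
#include <iostream>

int main() {
    double x = 3.14159;
    // "<<" takes on the meaning "send to the stream", and setw / endl act
    // as instructions pushed onto the stream, not ordinary computations:
    std::cout << std::setw(12) << x << std::endl;
}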

There are also many other C++ libraries that construct their own little world that looks very alien from normal C++ code, or have a lot of fancy mechanics that do a lot more than meets the eye. For example, a well-known domain-specific language for doing linear algebra and numerical analysis in general is Matlab, well, there are many C++ libraries (including mine to some extent, or also Eigen, or Blitz++) that reproduce a nearly identical language syntax, but as straight C++ (with lots of fancy tricks). Another example is Boost.Spirit, which is a DSEL for writing parsers (and related things) with near-EBNF syntax directly as C++ code (no need to use so-called "compiler-compiler" tools like YACC).

In order to be able …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

The one thing that has me confused is that I thought W is 1 if it's a vertex and 0 when it's a vector.

That's true for vectors in world coordinates and throughout the modelview transformation(s). However, at the last step (projection), they use the fourth component in this special way to be able to compute those normalized screen coordinates. And the projection matrix only deals with vertices anyways (no vectors), because the point of that matrix is to transform a position (vertex) from a "normal" 3D orthogonal coordinate system into the special projected coordinate system, i.e., the "clip" coordinates, which are then normalized using the fourth component.

I should just be able to feed this into opengl as the projection matrix (since this is gluperspective(...))?

Yes. The projection matrix in OpenGL is exactly that: it maps from world coordinates to clip coordinates, and the division by the fourth component is done by OpenGL internally, after the projection matrix has been applied. So, your projection matrix should be good to use directly (e.g., with glMatrixMode(GL_PROJECTION); glLoadMatrixf(mProjection->getData());).

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

shouldn't the depth be between 0 and 1 (if within the frustum)?

The projection matrix takes the vector to the so-called "clip" coordinates, not the screen coordinates. See the last section of this complete explanation.

The depth value will only be between 0 and 1 once you have done the division by the 4th component of the vector. In other words, you start with the "world" vector:

Vm = | Xm |
     | Ym |
     | Zm |
     | 1.0|

which you then multiply by PxM, to get the "clip" vector "Vc" as this:

Vc = P x M x Vm
Vc = | Xc |
     | Yc |
     | Zc |
     | Wc |

And then, you can get the "screen" vector (where x and y are between -1 and 1, and z is from 0 to 1) like so:

Vs = | Xc / Wc |
     | Yc / Wc |
     | Zc / Wc |

You can recover Vm from Vc by the simple (P x M)^-1 transformation, but recovering Vm from Vs is a bit harder (non-linear), but not impossible either, of course.
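
In code, the perspective divide looks like this (a minimal sketch of my own, using plain arrays for the vectors):

// From "clip" coordinates to "screen" coordinates.
// vc is the 4-component clip vector (Xc, Yc, Zc, Wc); vs receives (Xs, Ys, Zs).
void clip_to_screen(const double vc[4], double vs[3]) {
    vs[0] = vc[0] / vc[3];  // Xs = Xc / Wc, in [-1, 1] if within the frustum
    vs[1] = vc[1] / vc[3];  // Ys = Yc / Wc, in [-1, 1]
    vs[2] = vc[2] / vc[3];  // Zs = Zc / Wc, in [0, 1] (the depth value)
}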

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

OOP is a programming paradigm, which is to say, it's a way to reason about the code, i.e., the way you see / understand the code. It's a kind of philosophy, if you want. In very general terms, it's about looking at the overall application as a collection of objects, each belonging to a certain class that can play certain roles in the software, that, together, provide all the functionality that make the application work. Then, there are a number of practical patterns and abstract concepts that are attached to this paradigm, such as inheritance / polymorphism, encapsulation, abstraction, design by contract, etc...

Some languages have these practical patterns and abstract concepts more deeply ingrained in their language rules than others. I wouldn't say that any language is truly "pure" OOP, because programming paradigms are, first and foremost, about what goes on in the programmer's head, not in the code. It's also important to understand that programming paradigms are not mutually exclusive and don't have clear dividing lines.

You can write in an object-oriented way in C just as well as you can write in a procedural way in Java, and both of those things are actually very common. Like many other "philosophies", doing "pure OOP" software is something that only academics can afford to do. And in practice, no language can force you to remain "pure" in whatever paradigm is favored by that language, because the language doesn't control how you think, it only influences how you implement …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

If you are computing Vs from Vm, and then computing Vm back from Vs, and the two don't match, then it means that your matrix inversions are not correct. Regardless of whether you defined your matrices correctly with regard to getting a correct modelview + projection transformation, the transformation back and forth should always work if you inverted the matrices correctly.

To verify that you inverted your matrices correctly, just try to compute (P x M)^-1 x P x M, which should result in the identity matrix (all zeros, except all 1s on the diagonal). If that's not the case, then show us your matrix inversion code and we might be able to point out the problem.
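
For example, here is a quick way to run that check (using the Eigen library purely for illustration; substitute your own matrix types and values):

#include <Eigen/Dense>
#include <iostream>

int main() {
    // Stand-ins for your projection and modelview matrices:
    Eigen::Matrix4d P = Eigen::Matrix4d::Random();
    Eigen::Matrix4d M = Eigen::Matrix4d::Random();
    Eigen::Matrix4d test = (P * M).inverse() * (P * M);
    std::cout << test << std::endl;  // should print (nearly) the identity matrix
    std::cout << "is identity: " << test.isIdentity(1e-9) << std::endl;
}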

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I'm not sure I understand what you mean. I guess you are asking how to make a game with the same level of graphics quality as a 10-year-old game, but on today's hardware?

It is certainly possible. To understand this, you have to know that there are two things that make newer games have better graphics: more advanced features on graphics cards, and higher-detail artwork (textures, 3D models, etc.). The fact that more advanced features are available on modern graphics cards does not, in general, force you to use them. Most (if not all) of the original basic features available in 2001 are still available today; it's just that people don't use them much anymore because better options are now available on all modern computers and run fast enough. It's certainly possible to use the same old basic cheap-looking features that developers were limited to a decade ago. When I talk about features like that, I mean things like: pixel shaders (current) vs. the fixed rendering pipeline (old); quadratic / tri-linear / anti-aliased texture filtering (current) vs. linear / nearest-neighbor texture filtering (old); multi-texturing (current) vs. single textures (old); and so on.

As for the artwork, there is no problem in using lower-resolution textures, coarser models, shallower scenery, etc.. The only reason higher-quality artwork is used now is that graphics cards have enough memory to deal with it, where they previously couldn't.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

28.68$ woot woot!

I'm gonna wait for the "Cash out as Daniweb swag" button.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Moschops is mostly correct.

There are standard guarantees about the memory layout of the data members of a class, but the class has to meet a number of requirements / restrictions. In standard terminology, this kind of class is called a standard-layout class, and it has the following requirements:

  1. It has no virtual functions
  2. It has no virtual base classes
  3. All its non-static data members have the same access control (public, private, protected)
  4. All its non-static data members, including any in its base classes, are in the same one class in the hierarchy
  5. The above rules also apply to all the base classes and to all non-static data members in the class hierarchy
  6. It has no base classes of the same type as the first defined non-static data member

If your class meets the above requirements, then it is considered a standard-layout class, which means that the data members will appear in memory in the same order as they appear in the class definition (and even when a class is not standard-layout, that is usually the case too, at least on all compilers AFAIK), and there will be nothing else in the class except those data members. In other words, the class will appear, in memory, to be exactly the same as a C struct (and that's the whole purpose of this thing: to make those C++ classes binary-compatible with C structs).
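
Since C++11, you can even verify this with a compile-time trait from the <type_traits> header (a small sketch with hypothetical example classes):

#include <type_traits>

struct plain_record {   // hypothetical example; meets all the requirements
    int id;
    double value;
};

struct with_virtual {   // violates requirement 1
    virtual void f() { }
};

static_assert(std::is_standard_layout<plain_record>::value,
              "plain_record should be standard-layout");
static_assert(!std::is_standard_layout<with_virtual>::value,
              "a class with virtual functions is not standard-layout");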

That said, there can be padding, as Moschops points out. This is also true in …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

The first thing to try is to install additional drivers. To do this, open the Software & Updates tool from the Dash and click the Additional Drivers tab. Follow the on-screen prompts to check for, then enable, any proprietary (not open-source) drivers available for your system. This is because the default (open-source) drivers for graphics cards are often not good enough, or misbehave, and you'll have a better chance with the proprietary drivers (which come from the graphics card manufacturer).

BTW, what is your graphics card?

Have you done any kind of display reconfiguring (like dual displays, resolution, etc.)? It's possible that those changes broke the display configuration. You should revert to the simplest setup that works, then install the proprietary drivers, and then reconfigure the display as you wish, making sure you follow the correct method for your graphics card + proprietary driver.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Contrary to popular belief, the debate between Christopher Columbus and his peers was not about whether the Earth was round or flat; they all agreed it was round, but they disagreed on its size and on the size of Eurasia. The consensus on both sizes was pretty accurate to the real figures, and since they didn't know about the existence of America, they rightly figured that the trip across to Asia would be impossible. Imagine if there were no America: crossing the Atlantic, and then the Pacific, in one trip, with no re-supply point, would be crazy, especially with the means of the time. Columbus used the wrong measurement units and thought the Earth was only about 75% of its real size, and on top of that, he had re-calculated, very wrongly so, the size of Eurasia to be much larger than it really is. That's why he thought the trip was possible: his calculations put the east coast of Asia just a bit east of where the real American east coast is, and it's also why he naturally assumed he had landed in India when he reached the Caribbean, because by his calculations, that's almost exactly where he expected to be. This whole myth of "everyone believed the Earth was flat until Columbus proved otherwise" comes from a fictional biography by Washington Irving.

And on a related topic, contrary to popular belief, people stopped believing in a flat Earth a long time ago (except for a resurgence by some …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

If you do a hard shutdown (by holding the power button of the computer until it shuts down), does it still turn back on by itself?

If it does, then it must be some sort of hardware issue. Maybe the power-button was wired up wrong?

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

100,000 is too much for Indians.

Yeah, well, that was a few years ago. Today, at least here in Canada, you can get a better laptop than mine for about 40,000 rupees (700 CAD). I just checked, as an example: an Acer with a 15.4-inch screen, i5-4200, 1TB HDD, 10GB RAM, Win8.1, and Radeon R7 M265 (2GB) graphics goes for 700 CAD. I obviously don't know how laptop prices are in India, but I imagine they are of the same order of magnitude.

What are you talking about in "Quality"?

I don't know. When it comes to laptops, I think that construction quality is very important, because you carry the machine around and knock it around a bit, and it's important that it's well built. I always found Lenovos and Sonys to seem a bit wonky in their construction. This is also why I like both Acer and Toshiba. And I'm a bit on the fence with Dell laptops. But this is, by no means, a professional opinion. When it comes to internal components, all companies have mostly the same stuff, except that some try to cut costs by putting in cheap components or shabby assembly.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

When it comes to laptops, I have always found that Toshiba or Acer are pretty safe bets. I think HP is OK too. I've never trusted the quality of Lenovo, Sony or Dell, for laptops. That's just my 2 cents.

Btw, your specs are very close to what I have in my current Acer laptop (i7 2nd gen, 1.5 TB HDD, 8 GB RAM, 1 GB graphics, Win7), that I bought a couple of years back (for about the equivalent of 100,000 rupees), and I'm very happy with it.

Gribouillis commented: +1 for acer aspire with i7 +0
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

There are several Linux distributions that focus on security of various kinds. I'm not sure exactly which would be the most appropriate for you. They take different focuses. Some aim at anonymity (e.g., via Tor or I2P, like in Tails Linux). Some aim at preserving the integrity of the system, like Immunix. Some aim at running secure servers, like Fedora, CentOS and RHEL.

And when it comes to securing a system, ironically, the NSA can be a useful source of information. Using SELinux-enabled systems is probably a good idea. You might be paranoid about NSA backdoors, maybe justifiably so, but I think SELinux largely predates the start of NSA's criminal activities, and it's mostly a "way to do things" (protocol) as opposed to an actual implementation (AFAIK), so, the implementations of it are probably trustworthy.

It sounds like what you want is mainly to be able to store important information. For that purpose, you need either full disk encryption (e.g., truecrypt or dm-crypt) or file-system encryption (e.g., EncFS or eCryptFS), or both. Personally, I'm not convinced that full disk encryption is really that good because if someone accesses your system (physically or remotely), then having user or root access to the system implies being able to read / write data on the encrypted drive, at least, that's how I understand it. I guess the point is that securely storing data, to me, implies that the data is never left unencrypted (or readable) for any period of …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

There are mainly two easy ways to "parallelize" the operations in a for-loop like this.

One option is to use the SSE(2-3) instruction sets. SSE instructions can basically perform multiple (4) floating-point operations at once (in one instruction). This is something that the compiler can do automatically. If you are using GCC (or ICC), these are the appropriate compilation flags:

-mfpmath=sse -Ofast -march=native -funroll-loops

If you add those to your compilation command, the compiler should optimize more heavily for your current architecture (your computer), using SSE instructions, and unrolling for-loops to further optimize things.

Another easy option for parallelizing code is to use OpenMP. OpenMP allows you to tell the compiler to create multiple threads, each executing one chunk of the overall for-loop, all in parallel. It requires a few bits of mark-up in your code, but it's easy. Here is a parallel for-loop that applies a logarithm to an array using 4 threads:

#include <cmath>  // needed for std::log

void do_log_for_loop_omp_sse(float* arr, int n) {
  #pragma omp parallel num_threads(4)
  {
    #pragma omp for
    for(int i = 0; i < n; ++i)
      arr[i] = std::log(arr[i]);
  }
}

When you compile code that uses OpenMP with GCC, you need to provide the command-line option -fopenmp to enable it.

Also note that you can easily combine the two methods by using OpenMP in your code and telling the compiler to use SSE instructions.
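
For example, the complete compilation command combining both (assuming GCC and a hypothetical source file named prog.cpp) could look like this:

g++ prog.cpp -o prog -fopenmp -mfpmath=sse -Ofast -march=native -funroll-loops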

Just for fun, I wrote a program that measures the time for all these four methods (for 3000 …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

There are two different matters at play here. There is the mathematical concepts of column vectors versus row vectors. And there is the memory layout issue of storing matrices in column-major or row-major. These are two completely separate issues. In mathematics, we almost exclusively use column vectors. In mathematics, memory layouts do not exist, since mathematics is abstract, and memory layouts are an implementation detail / choice when you put the abstract math into real code.

So, to make things clear, in mathematics, we have this:

|X X X T|   |X|
|X X X T| x |Y|  =  M x V
|X X X T|   |Z|
|0 0 0 1|   |1|

which performs the rotation and translation of the 3D vector (X,Y,Z). If you transpose the entire expression, you get an equivalent expression:

(M x V)^T = V^T x M^T = | X Y Z 1| x |X X X 0|
                                     |X X X 0|
                                     |X X X 0|
                                     |T T T 1|

(where all the X's are transposed too). The above is how the mathematical conventions of row-vectors and column-vectors relate to each other. In other words, using row-vectors just means that you transpose everything. But like I said, in mathematics, we use, almost exclusively, column-vectors. And I just noticed that Direct3D documentation uses row-vectors... (sigh).. (rolleyes)..

In OpenGL, the matrices are stored in column-major ordering, meaning that the memory index of each element of the matrix is as follows:

| 0 …
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Are you sure that you are placing the studentName.txt file in the same folder as the .exe file that is generated by the compiler? I suspect that the file you think it is using is not the one that it is actually using. When you open "studentName.txt", it will open the studentName.txt file in the current directory of the running program, i.e., where the .exe file runs from, or wherever the IDE runs the program from. Most IDEs (like Visual Studio or CodeBlocks) have special folders within your project's top-level folder where the executable is generated and run from, and it may not be the one you think. Make sure to locate the .exe file, and place the studentName.txt file in that directory.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

The machine do learns with experience but, How? I little bit understand that it learns via the artificial neural network combinations it create in respond to input.

This is called reinforcement learning. It is completely independent of whether you use neural networks or not. Neural networks do have some features that make them suitable for it, e.g., back-propagation learning can be adapted for reinforcement learning. But reinforcement learning generalizes beyond any particular method you choose for the input-output mapping.

The point is that you insert your input-output mapping into some situation (usually simulated) where the "agent" gets some input about what's going on in the environment (e.g., position of the ball, position of the opposing paddle, etc.) and outputs some "action" on the environment (e.g., the position of its own paddle). Then, the training is done by playing many, many games (many trials, many simulations, etc..), and at every "game", a reward is given based on how successful the game was for the agent (e.g., 1: win, 0: lose). So, that's how the problem is set up.

At that point, you pick some method to compute the output (moving the paddle) from the given input (position of ball), and you make sure that this method has sufficient complexity and adaptable parameters to be able to re-create complex "emerging" behaviors or strategies. One option for that is a neural network, but it is far from being the only option. Then, you have to find a …
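
To make that setup concrete, here is a very rough sketch (my own illustration: play_one_game is a hypothetical stand-in for the simulated game, and the "learning" is a crude random hill-climbing on the policy parameters, just to show the shape of the training loop):

#include <cstddef>
#include <cstdlib>
#include <vector>

// Hypothetical stand-in: runs one full game with the given policy parameters
// and returns the reward (e.g., 1.0 for a win, 0.0 for a loss).
double play_one_game(const std::vector<double>& params) {
    return 0.0;  // dummy value; a real version would run the simulation
}

void train(std::vector<double>& params, int num_games) {
    double best_reward = play_one_game(params);
    for(int game = 0; game < num_games; ++game) {
        // Perturb the parameters of the input-output mapping a little:
        std::vector<double> trial = params;
        for(std::size_t i = 0; i < trial.size(); ++i)
            trial[i] += 0.01 * (2.0 * std::rand() / RAND_MAX - 1.0);
        // Keep the perturbation only if it earned a better reward:
        double reward = play_one_game(trial);
        if(reward > best_reward) {
            params = trial;
            best_reward = reward;
        }
    }
}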

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I agree with Dani and carl.potak in that a really good fit might be the middle group between client / management and the professionals (programmers / engineers). This could be anything from project manager to sales representative. You might not see yourself doing programming (software engineering, etc.) work, but with a CS degree you do have a good knowledge (I hope) of the terminology programmers use and of the issues they have to deal with. This, by itself, can be a valuable skill.

Very often, companies have a difficult time finding good people to act as project managers, for example, because the job requires a good understanding of the technical issues, but it does not involve much hands-on technical work, and most people who have the former don't like the latter, and vice versa. I just know that in engineering disciplines, being the "project manager" is always a matter of drawing the short straw, i.e., those who love the hands-on technical work hate being stuck doing the hands-off project management work, but it's the lesser of two evils, the second evil being a project manager who lacks an understanding of the technical issues.

Being a sales representative (or doing similar "deal with clients" work) is also another good option. Many technical companies have technical clients (not your average Joe, but other technical companies), and so, the clients are professionals (programmers, engineers, etc.). And as an engineer, for example, I hate dealing with representatives who don't understand anything about the technical …

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

"Hoe" is the correct spelling for the "immoral pleasure seeker". But it's a contraction of the word "whore", of course.

I have no idea where AD got his "Ho" and "Hore" spellings from... maybe the letter "W" and silent "E" didn't exist in those Ancient times from when that Dragon hatched.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

What is really important is to resize by an amount that is proportional to the current size of the array. Typical growth factors vary from 1.25 to 2. The reason it is important to use a proportional growth factor like that is that it gives the cost of allocation (and copying) an amortized constant cost.

Here is how it works. Let's say the factor is 2, and you currently have N (valid) elements just after resizing to a total capacity of 2N. This means that you can add N more elements to the array before you have to resize again, at which point you will have to copy the 2N valid elements from the old memory to the newly allocated memory. That copy will have a O(2N) cost, but you did it once per N additions to the array, meaning you did it with 1/N frequency, and so, the average cost is O(2N * 1/N) = O(2), which is what we call an amortized constant cost. If the factor were 1.5, by the same calculation (with frequency 1/(0.5*N)), you get an amortized cost of O(1.5*N * 1/(0.5*N)) = O(3). So, for any factor F, the cost is O(F/(F-1)). That's where you have a trade-off: with a larger growth factor the amortized cost is lower (the limit is 1), but the excess memory is greater (on average, the unused capacity is N*(F-1)/2). A typical choice is 1.5, or around that. This is the standard way to handle a dynamic-sized …
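
Here is a bare-bones sketch (my own illustration) of that push-back logic with a growth factor of 1.5:

#include <algorithm>
#include <cstddef>

class dyn_array_sketch {
  public:
    void push_back(double value) {
        if(size_ == capacity_) {
            // Grow proportionally (factor 1.5), not by a fixed amount:
            std::size_t new_cap = (capacity_ == 0 ? 4 : capacity_ + capacity_ / 2);
            double* new_data = new double[new_cap];
            std::copy(data_, data_ + size_, new_data);  // the O(N) copy, amortized
            delete[] data_;
            data_ = new_data;
            capacity_ = new_cap;
        }
        data_[size_++] = value;  // cheap in the common case
    }
    ~dyn_array_sketch() { delete[] data_; }
  private:
    double* data_ = nullptr;
    std::size_t size_ = 0;
    std::size_t capacity_ = 0;
};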

StuXYZ commented: great post +9
mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

So, if the markdown is rendered by a stock version of PHPMarkdown, and the mishandling of code blocks in lists and quotes comes from PHPMarkdown, then it would seem that the issue should be reported there.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Did you update your version of PHPMarkdown? I ask because I spotted this in the developer's version history:

Extra 1.2.5 (8 Jan 2012):
Fixed an issue preventing fenced code blocks indented inside lists items and elsewhere from being interpreted correctly.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

You don't strike me as someone who accepts defeat so easily. ;)

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

I agree that this issue is annoying.

There are similar issues with code blocks inside of quotes.

I think the markdown renderer could use a bit of re-working.

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

An error like "too many levels of symbolic links" means that you must have a recursive (i.e., circular) set of symlinks. In other words, you must have created a symlink (maybe for .bashrc or /bin/bash) which directly or indirectly refers back to itself. When that happens, the OS follows the symbolic links which go round and round in a circle, until the OS reaches a limit on how far it will keep doing that. The main reason for this limit to exist is for this particular situation (circular symlinks), because it could cause the kernel to hang indefinitely (infinite loop) if there was no limit.

Now that you understand what the problem is, there is a good chance you already know what you did wrong.

If not, try to investigate what symlinks you have created (intentionally or not) surrounding .bashrc and /bin/bash (two things that you should not mess with!).

If you have trouble reaching a usable terminal because of this, here are some tricks, off the top of my head:

1) Boot into the root shell. This shouldn't use your user account's .bashrc file. The disadvantage is that this environment is kind of crude (no GUI at all).
2) You can install an alternative shell and use it instead of bash. Most of them are nearly identical to bash, but they won't use the .bashrc file or the /bin/bash program.
3) Create a new user account (through the Ubuntu system settings menus) with sudo privileges. Log in with …