arkoenig 340 Practically a Master Poster

Just for fun... What happens if you add the semicolon that you forgot after the } in foo.h?

arkoenig 340 Practically a Master Poster

Are you saying that you'd like a programmer to be able to write 14 and have it mean, say, 42? That's what changing a literal constant would mean.

arkoenig 340 Practically a Master Poster

Please show us the work you've done so far, and explain where you're stuck.

arkoenig 340 Practically a Master Poster

This is a Java program; I think you will be more likely to get help with it if you post it in a Java forum rather than a C++ forum.

arkoenig 340 Practically a Master Poster

Thanks for the reply.

Now considering the source code in your post, what will happen if I execute something like:

Bar b(12)

Which constructor will be called in this case? Will this give an error because the class Bar has no constructor of the form Bar(int) or will it instead execute the constructor Foo(int) without any error?

It will give an error because Bar has no constructor of the form Bar(int).


Also, will it be fine if I edit line#10 of your source and make it something like:

Bar(): Foo(int) { }

in order to generalize the Foo part?

No. A constructor initializer expects an expression, and int is not an expression.
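
If the goal is to let the person constructing a Bar choose the argument, the usual approach is to give Bar its own constructor parameter and forward it in the constructor initializer. A minimal sketch, reusing the Foo and Bar names from this thread (the empty constructor bodies are just placeholders):

class Foo {
public:
    Foo() { }
    Foo(int) { }
};

class Bar: public Foo {
public:
    Bar(int n): Foo(n) { }   // n is an expression, so this is legal
};

int main()
{
    Bar b(12);   // calls Bar(int), which in turn calls Foo(int)
}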

arkoenig 340 Practically a Master Poster

Every derived-class object contains a base-class object as part of it. By implication, constructing a derived-class object involves constructing the base-class object as well.

Therefore, when you construct an object of type Bar, part of that construction process is to construct an object of type Foo.

The Foo constructor is not inherited in the normal sense, because if it were inherited, you would be able to override it--and you can't. One of Foo's constructors is always executed as part of constructing a Bar. All you can do is choose which constructor to execute, and which arguments to pass to it, by using a constructor initializer:

class Foo {
public:
    Foo();
    Foo(int);
    // ... (perhaps other constructors)
};

class Bar: public Foo {
public:
    Bar(): Foo(42) { }
    // ...
};

Now, if you execute

Bar b;

that will execute Bar's default constructor. That constructor has a constructor-initializer of Foo(42), which means that it will pass an argument of 42 to Foo's constructor when it executes. The fact that that argument exists, and the fact that it has type int, will select the Foo constructor that takes an int argument.

Inheritance does not come into play here.

arkoenig 340 Practically a Master Poster

AFAIK, the main advantage of nested functions in languages that have them is to provide support for closures. http://en.wikipedia.org/wiki/Closure_%28computer_science%29

Yes.

C++ already had function objects, which look like functions but are really first-class values: they can be stored, mutated, passed as arguments, returned, and bound to variable names. And therefore C++ already had a mechanism for supporting closures - several libraries, e.g. Boost::bind, Boost::spirit, Boost::phoenix, do exploit closures extensively.

The decision not to implement nested functions came long before Boost, or even the STL, existed. I think (but do not remember for sure) that function objects did exist at the time, and their existence contributed to the discussion. Essentially, one camp thought that function objects could easily substitute for nested functions; another thought that programmers would have difficulty writing such substitutions and getting them right.

In particular, one difficulty with using function objects as closures is that each time you define a function object, it is a new type. So to use them in any generality, you may find templates creeping in where they're not really needed.
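
As an illustration of that last point, here is a minimal sketch (the names AddN and apply_to_42 are invented for the example). The function object stands in for a nested function that captures n, and the code that uses it generically ends up being a template because every function-object type is distinct:

#include <iostream>

// A hand-written function object that "closes over" a value of n.
struct AddN {
    int n;
    explicit AddN(int n): n(n) { }
    int operator()(int x) const { return x + n; }
};

// Code that accepts an arbitrary function object has to be a template,
// because AddN and every other such type are all different types.
template <typename Fn>
int apply_to_42(Fn f) { return f(42); }

int main()
{
    AddN add3(3);
    std::cout << apply_to_42(add3) << '\n';   // prints 45
}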

In C, gcc has had an extension which provides limited support for nested functions and closures for quite some time now. http://gcc.gnu.org/onlinedocs/gcc/Nested-Functions.html One can only call these functions (with closures) as long as the containing stack frame is alive:
GCC implements nested functions via a simple trampoline bounce; the nested function does not store any state information about closures, instead they are picked up directly off the stack frame.

arkoenig 340 Practically a Master Poster

The expression pick != 'r' || pick != 'g' is always true.

The reason is that either pick != 'r' is true, in which case the whole expression is true, or pick != 'r' is false, in which case pick must be equal to 'r'. And if that is the case, then pick != 'g' must be true, so the whole expression is still true.
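
If the intent was "pick is neither 'r' nor 'g'", the test presumably needs && rather than ||:

if (pick != 'r' && pick != 'g') {
    // pick is neither 'r' nor 'g'
}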

arkoenig 340 Practically a Master Poster

Also, the form of your example:

String operator+(const String &s)

suggests that operator+ is a member function. The name of the class, String, suggests that operator+ is probably being used for concatenation. The fact that the function has one argument rather than two suggests that it is probably a member function rather than a standalone function. Using a member function for concatenation is a bad idea because it requires the left operand of a concatenation to be a String rather than a type that can be converted to String.

Moreover, the lack of const in the member-function declaration (if, indeed, it is a member function) means that you cannot use a const String object as the left operand.
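
A sketch of the more conventional form, assuming that String is copyable and has an operator+= for appending (both assumptions, because the original class definition isn't shown):

String operator+(const String& lhs, const String& rhs)
{
    String result = lhs;   // copy the left operand
    result += rhs;         // append the right operand
    return result;
}

Because this version is a non-member function taking two const references, either operand can be a const String or anything that converts to String.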

That's two elementary programming errors in that one example. If you saw that example in a textbook, it may be a good idea to consider a different textbook.

mrnutty commented: Good points +5
arkoenig 340 Practically a Master Poster

Assuming that the actual question meant to ask whether each element of one array is equal to the corresponding element of the other, the first observation is that in order for them to be equal, they must have the same number of elements, which I'll call n. Then

std::equal(array1, array1+n, array2)

returns true if the two arrays are equal and false otherwise.
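
For example, a minimal complete sketch:

#include <algorithm>
#include <cstddef>
#include <iostream>

int main()
{
    int array1[] = { 1, 2, 3 };
    int array2[] = { 1, 2, 3 };
    const std::size_t n = sizeof array1 / sizeof *array1;

    std::cout << std::boolalpha
              << std::equal(array1, array1 + n, array2) << '\n';   // prints true
}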

If that's not the question that the original poster had in mind, clarification would be appreciated.

arkoenig 340 Practically a Master Poster

The reason is that it is difficult to implement nested functions in a way that maintains compatibility with C and other compilers. For example:

extern void f(void (*)());

int main()
{
    int x;
    void g() { x = 42; }
    f(g);
}

In order for this program to work, f, which is compiled separately, has to have access to information that allows it to determine the location of the local variable x inside main.

The typical implementation technique for solving this problem is to say that function pointers are two words long: One word contains the address of the function's code; the other contains the address of the most recently instantiated stack frame of the statically enclosing function.

Trouble now ensues if the function f is in a language whose compiler does not know about this convention.

It is possible to sidestep the compatibility problem by saying, essentially, that global functions and nested functions have pointers of two different types. Several people on the C++ committee actually sketched out an implementation strategy that would have made it possible to do so. However, after protracted discussion, the committee ultimately decided that the added implementation complexity was not worth the benefit. If I remember correctly, it was a fairly close decision.

vijayan121 commented: that was a lucid explanation. thanks. +6
arkoenig 340 Practically a Master Poster

What is to stop someone simply 'going backwards' in terms of the encryption process? For example, if I encrypt something using a public key, why can't I simply decrypt it using the same public key?

The simplest answer is that if it were possible to do that, what you would have would not be a public-key encryption algorithm.

For example, imagine a program named crypt, to which you give a key and a message:

encrypted_message = crypt(key, unencrypted_message);

There are certainly encryption programs in which it is possible to get the original message back by using the same key:

unencrypted_message = crypt(key, encrypted_message);

but those are not public-key encryption programs. With a public-key encryption program, you get two keys, which we can call public_key and private_key, which you can use this way:

encrypted_message = pkencrypt(public_key, unencrypted_message);
unencrypted_message = pkdecrypt(private_key, encrypted_message);

For that matter, you can also use them backward:

alternative_encrypted_message = pkencrypt(private_key, unencrypted_message);
unencrypted_message = pkdecrypt(public_key, alternative_encrypted_message);

But if you encrypt a message with a key, and then try to decrypt it again with the same key, you get garbage.

The point, then, is that you get a pair of matched keys. You keep one of them to yourself; you publish the other. If I want to send you a secret message, I encrypt it with your public key, which enables you to decrypt it with your private key. Only someone who knows your private key can decrypt it.

Similarly, if you want …

arkoenig 340 Practically a Master Poster

I beg to differ with both of the previous posters.

Most computers these days (including, so far as I know, all currently manufactured Intel processors) use IEEE floating point, which requires that the results of floating-point addition, subtraction, multiplication, division, and square root be deterministic. In particular, the IEEE standard requires that the result of any of these operations be bit-for-bit identical with what the result would be if the operation were conducted in infinite precision and then rounded (according to the current rounding mode) to the given precision.

The standard does permit intermediate results to be computed in greater precision than the variables involved in the computation, but that liberty does not affect the program shown here.

The real reason that the output from this program is surprising is simpler: The type of the literal 0.2 is double, not float. Moreover, the value 0.2 cannot be precisely represented in an IEEE floating-point number.

Therefore, when we write

float x = 0.2;
float y = x - 0.2;

the value of x - 0.2 is computed by converting x to double, subtracting the double representation of 0.2 from it, and converting the result back to float. The result is to make y equal to the difference between the float representation of 0.2 and the double representation.
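
For concreteness, here is a minimal sketch of that computation; on an IEEE implementation I would expect it to print a small nonzero value for y (roughly 3e-09), not 0:

#include <iostream>

int main()
{
    float x = 0.2;        // the double literal 0.2, rounded to float
    float y = x - 0.2;    // x converted to double, double 0.2 subtracted
    std::cout << y << '\n';
}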

If you were to change the program as follows:

double x = 0.2;
double y = x - 0.2;

then I believe that the IEEE standard would …

arkoenig 340 Practically a Master Poster

Note that the value returned by ip.c_str() persists only until the next time you change the value of ip. So if you modify ip in any way between the time you assign the value to argv[1] and the time you use the value, the effect is undefined.
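
A minimal sketch of the trap (the names ip and args are assumptions; they stand in for whatever the original code does with the pointer):

#include <string>

int main()
{
    std::string ip = "127.0.0.1";

    const char* args[2];
    args[1] = ip.c_str();    // points into ip's internal buffer

    ip += ":8080";           // modifying ip may invalidate that pointer

    // Using args[1] here has undefined behavior: the buffer it pointed
    // to may have been reallocated or overwritten.
}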

arkoenig 340 Practically a Master Poster

You should write iter!=fg_sprites.end() rather than iter<fg_sprites.end() because the multiset type offers a bidirectional iterator, not a random-access iterator, and only random-access iterators support <.

(Also, it is preferable to write ++iter rather than iter++, because there's no need to copy the value of iter and then throw the copy away without doing anything with it.)
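
Putting the two points together, the loop would look something like this (the element type is shown as int just to keep the sketch self-contained):

#include <set>

int main()
{
    std::multiset<int> fg_sprites;

    for (std::multiset<int>::iterator iter = fg_sprites.begin();
         iter != fg_sprites.end();    // != rather than <
         ++iter)                      // ++iter rather than iter++
    {
        // use *iter
    }
}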

arkoenig 340 Practically a Master Poster

Ahh i see, as the next issue i came up with was that the compiler was silent but it didnt work:P

What do you mean "it didn't work"? In the code you posted, you never actually use your Write member, so what didn't your code do that you expected it would do?

arkoenig 340 Practically a Master Poster

Your statement

Write = &WriteLogS;

should be

Write = &Logger::WriteLogS;

and similarly for the other similar statement.
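
A minimal sketch of the pointer-to-member syntax in question (the Logger details here are assumptions, because the original class isn't shown in this post):

#include <iostream>
#include <string>

class Logger {
public:
    void WriteLogS(const std::string& msg) { std::cout << msg << '\n'; }

    // Pointer to a member function of Logger.
    void (Logger::*Write)(const std::string&);

    Logger() { Write = &Logger::WriteLogS; }       // the class name is required

    void log(const std::string& msg) { (this->*Write)(msg); }
};

int main()
{
    Logger l;
    l.log("hello");
}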

arkoenig 340 Practically a Master Poster

make the functions private? not sure I understand what you mean, if you can give me a code example, thanks

template<typename T> class Stack {

    // ...
private:
    Stack(const Stack&);               // copy constructor
    Stack& operator=(const Stack&);    // copy assignment

public:
    // ...
};

So you declare the copy constructor and assignment operators as private, thus ensuring a compile-time diagnostic message for anyone who tries to use them. Because no one can actually use them, you don't need to define them -- just declare them as shown here.
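
With that in place, any attempted copy is rejected at compile time. For example, this sketch would fail to compile:

Stack<int> a;
Stack<int> b(a);   // error: Stack's copy constructor is private
Stack<int> c;
c = a;             // error: Stack's copy-assignment operator is private

(In C++11 and later, the same intent is usually expressed by declaring the two members as "= delete".)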

arkoenig 340 Practically a Master Poster

3) You have no copy constructor or copy-assignment operator. If you try to copy a Stack, the effect is likely to be that you will have two objects that share a common data structure. Destroying one of them will cause mayhem with the other.
I am not using a copy or assignment operator for this right now; this was a start. When I make more complicated code I'll start doing that. Then again, I should try writing them for practice, but I do understand that if I wanted to copy one object to another I would need a copy constructor, and if I want to assign the value from one object to the other I would need an assignment operator function.

If you don't want to define the copy or assignment operators, make them private (without defining them otherwise) so that no one tries to use them by mistake.

arkoenig 340 Practically a Master Poster

I'd like to echo Aranarth's comments. In addition:

1) The clear() function appears to delete one element, despite the comment that suggests that it deletes all of them.

2) Therefore, the destructor deletes one element and leaves the rest of them sitting around.

3) You have no copy constructor or copy-assignment operator. If you try to copy a Stack, the effect is likely to be that you will have two objects that share a common data structure. Destroying one of them will cause mayhem with the other.

arkoenig 340 Practically a Master Poster

@StuXYZ.. just a little important correction, the OP should add:

virtual ~baseClass() {}; //notice the ~ for destructor

check also this thread on a very similar topic.

One more little correction:

virtual ~baseClass() {} // no semicolon
arkoenig 340 Practically a Master Poster

You need to make the destructor in the base class as virtual (otherwise the base class destructor will be called directly).

A small, pedantic, correction:

You need to make the destructor in the base class virtual, otherwise the effect is undefined (Most implementations call the base-class destructor without first calling the derived-class destructor, thereby leaving the object's memory in a partially-destroyed state).
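
A minimal sketch of the situation being described (the class names are placeholders):

#include <iostream>

class Base {
public:
    virtual ~Base() { std::cout << "~Base\n"; }
};

class Derived: public Base {
public:
    ~Derived() { std::cout << "~Derived\n"; }
};

int main()
{
    Base* p = new Derived;
    delete p;   // with the virtual destructor: prints ~Derived, then ~Base
                // without it: undefined behavior
}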

arkoenig 340 Practically a Master Poster

Otherwise, we wind up with the original poster writing

myVec.clear();
VecType vert;
myVec.swap(vert);

without understanding why the call to clear is unnecessary, or why the second and third statements could have been written as

VecType().swap(myVec);      // This works...

but not as

myVec.swap(VecType());      // ...but this doesn't.

It occurred to me just now that these remarks don't tell the whole story. Consider:

// Example 1
{
    VecType myVec;

    // Put a bunch of stuff in myVec

    VecType().swap(myVec);      // Clear myVec

    // Put a bunch of stuff in myVec again

    ...
}

// Example 2
{
    VecType myVec;

    // Put a bunch of stuff in myVec

    VecType temp;
    temp.swap(myVec);      // Clear myVec

    // Put a bunch of stuff in myVec again

    ...
}

These two examples are likely to have very different characteristics in terms of how much memory they consume. For the moment, I'll leave it as an exercise to figure out why. When you've figured it out, I think you will see why I say that there is more to this problem of controlling memory allocation than meets the eye, and perhaps you will understand better why I wanted to concentrate on the fundamentals first.

I am reminded of Brian Kernighan's two rules of optimization.

1) Don't do it.

2) [for experts only] Don't do it yet.

arkoenig 340 Practically a Master Poster

Failing to solve the problem by oversimplifying is not the same thing as premature optimization.

Failing to understand a problem thoroughly before trying to solve it is the kind of carelessness that often leads to premature optimization.

Evidently we have different ideas about what is important. Nothing wrong with that--but I would like to suggest that you think twice before assuming that everyone who disagrees with you is wrong.

arkoenig 340 Practically a Master Poster

I still wonder whether

myVec.resize(0)

or consecutive calls to

myVec.pop_back()

might guarantee that the memory will really be released. If the swapping method is faster than the above (given that some of them really release the memory), I should use it. I hope a call to myVec.clear() (as I did in freeFromMemory) is not clashing with swapping.

I assume that you're talking about the memory allocated by the vector (i.e. the memory occupied, but not allocated, by the vector's elements), not the memory allocated by its elements.
In that case, pop_back will definitely not free that memory, because doing so would require invalidating all iterators to elements of the vector.

Calling resize(0) is permitted to free the vector's memory, but is not required to do so. For that matter, the swap technique is not required to free the memory either; but it is harder to understand how one might go about implementing the C++ library in a way that causes it not to do so.

By far the most straightforward way of freeing the memory associated with a vector is to make the vector a local variable and let it go out of scope when you no longer need it. Of course, that technique may or may not apply to your particular program.
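
A sketch of that last approach, with the element type shown as int just to keep the example self-contained:

#include <vector>

void work()
{
    {
        std::vector<int> myVec;

        // ... fill and use myVec ...

    }   // myVec goes out of scope here, and its memory is released

    // ... code that no longer needs the vector ...
}

int main() { work(); }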

arkoenig 340 Practically a Master Poster

I fail to see why you would advocate a solution that only works for an unconfirmed case (ie. you're exhibiting wishful thinking) and dismiss a solution that works in all cases.

1) Because I don't like premature optimization.

2) Because I think that before trying to explain this particular technique, it is important to be sure that the original poster understands why trying to apply delete to a vector element is unlikely to do what was intended.

So I think the right thing to do is to explain (and to urge the original poster to understand) the issues behind the original question before proceeding. Otherwise, we wind up with the original poster writing

myVec.clear();
VecType vert;
myVec.swap(vert);

without understanding why the call to clear is unnecessary, or why the second and third statements could have been written as

VecType().swap(myVec);      // This works...

but not as

myVec.swap(VecType());      // ...but this doesn't.

My teaching experience has shown me that the best time to explain these intricacies is only after the fundamentals are well understood.

Ego has nothing to do with it.

arkoenig 340 Practically a Master Poster

Here's reality. Even if you clear the elements of the vector, the capacity isn't guaranteed to decrease. The only way to truly free the memory owned by a vector is to force the vector object out of scope.

Yes, but in this particular case, that probably doesn't matter.

The original poster was saying that the individual vector elements consumed large amounts of memory. In almost all cases, class objects that consume large amounts of memory do so by dynamically allocating that memory when they are constructed and freeing it when they are destroyed. To do otherwise requires fixing the amount of memory used during compilation, and has the side effect that every object of the class uses the same amount of memory, whether or not it needs to do so.

So if the class objects are sensibly designed, almost all of the memory occupied by each class object will be freed when the object is destroyed, and calling clear will work as I described. The extra memory consumed by the vector itself will be trivial in comparison to the memory consumed by its objects.

arkoenig 340 Practically a Master Poster

Suppose you have an object of type std::vector<VectorRam>. That object contains elements, each one of which is an object of type VectorRam.

If you want to free the memory that those VectorRam objects occupy, there are two ways to do so:

1) Free the vector itself. That is, if the vector is a local variable, allow it to go out of scope.

2) Reduce the number of elements in the vector. Calling myVec.clear() is one way to do that; it reduces the number of elements to zero. If you don't want to get rid of all of the elements, you could call myVec.erase or myVec.resize to get rid of just some of them.

What you cannot do is apply delete to individual elements of the vector. You can use delete only with a pointer to an object that was allocated with new; in this case you do not even have a pointer, let alone a pointer to dynamically allocated memory.

arkoenig 340 Practically a Master Poster

This is more code than I want to bother reading without a clue as to what I'm looking for. However, the following oddity did jump out at me:

The for statement on line 10 has as its subject the while statement on lines 11 through 19. Is that what you had in mind? If so, why is the code not indented that way? If not, what did you have in mind?

And while we're at it, I should note that the code is a little weird: Two nested loops, each of which has max < N as its condition, and each of which increments max each time through. Which means that when the inner loop terminates, so does the outer one--in which case I don't understand why you're bothering to write a loop at all.

So I am wondering whether perhaps line 10 was really intended to be there, or is left over from an earlier version of the program by mistake.

arkoenig 340 Practically a Master Poster

I tried the following code in my Turbo C++ ..

int j=5;
cout<<++j + ++j + j++;

and got the result 20, as expected.

Your first mistake is having any expectations at all about this statement.

It tries to modify the value of j twice (or more) in the same expression (or, more accurately, between two sequence points), so the result is undefined.

This means that a C++ compiler is permitted to generate code that does anything at all when you try to execute this statement, including deleting all your files and sending hate mail to all your friends.

arkoenig 340 Practically a Master Poster

String literals have type "pointer to const char" and should not (and, in many circumstances, cannot) be passed to function arguments of type "pointer to char".

Bad:

extern void foo(char *);

void bar()
{
    foo("Hello");   /* attempted conversion of const char * to char * */
}

Good:

extern void foo(const char *);

void bar()
{
    foo("Hello");   /* OK */
}
arkoenig 340 Practically a Master Poster

How did I come up with it?

If you execute bullet_list.erase(iter), that invalidates iter. So before executing bullet_list.erase(iter), it is necessary to compute the new value to be placed into iter, because there will be no opportunity to do so later.

Here's the straightforward way to do it:

std::list<bullet>::iterator temp = iter;
++iter;
bullet_list.erase(temp);

This part of the solution is pretty simple. I guess it's a creative step to realize that

bullet_list.erase(iter++);

does exactly the same thing, and it's hard to explain why I happened to think of it.

It helps to have been using C++ for 25 years.

arkoenig 340 Practically a Master Poster

This code doesn't work:

std::list<bullet> bullet_list;
std::list<bullet>::iterator iter;

bool alive = false;
for (iter = bullet_list.begin(); iter != bullet_list.end(); ++iter) {
	bool isalive = iter->alive;
	if (isalive == true) {
		alive = iter->move(); // move is a function inside the struct "bullet"
	} 
	if (alive == false) {
		int bulletID = iter->id;
		bullet_list.erase(iter);
		dbDeleteObject(bulletID); // deletes object from game
	}
}

The problem is in line 12. Executing bullet_list.erase(iter); causes the element to which iter refers to be deleted from the list. Doing so invalidates iter. On the next trip through the loop, iter is invalid, so ++iter has undefined effect.

One way to deal with this problem is to increment iter only when it is known to be valid. The following rewrite is slightly sneaky, but I think it will work:

std::list<bullet> bullet_list;
std::list<bullet>::iterator iter;

bool alive = false;
for (iter = bullet_list.begin(); iter != bullet_list.end(); ) {  // No increment!
	bool isalive = iter->alive;
	if (isalive == true) {
		alive = iter->move(); // move is a function inside the struct "bullet"
	}

	// The following if statement increments iter one way or another,
	// regardless of the value of alive
	if (alive)
		++iter;
	else {
		int bulletID = iter->id;
		bullet_list.erase(iter++);
		dbDeleteObject(bulletID); // deletes object from game
	}
}

The point is that bullet_list.erase(iter++); copies iter, increments it (which causes it to refer to an element that will survive the call to erase), and then erases the element to which iter formerly referred. This …

Yiuca commented: I don't like giving broken code but it sometimes happens, thanks for correcting it. +2
arkoenig 340 Practically a Master Poster

The book recommends against "using namespace std;" because if you use it, and happen to define a name that clashes with a name in the standard library, you wind up with ambiguity problems.

Moreover, because C++ implementations are permitted to extend the standard library by defining their own names (and will surely do so even if they're not permitted), the practical effect of saying "using namespace std;" is that you do not know whether your program will work on a new implementation until you try it.
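
For example, here is a sketch of the kind of clash in question; with the using-directive in place, the unqualified name count can refer either to the global variable or to std::count from <algorithm>, and a conforming compiler will complain that the reference is ambiguous:

#include <algorithm>   // declares std::count

using namespace std;

int count = 0;         // our own name, which happens to clash

int main()
{
    ++count;           // error: reference to 'count' is ambiguous
}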

arkoenig 340 Practically a Master Poster

You are exactly right about the point of the question. The question is there in order to encourage readers to stop thinking about data formats in terms of absolute sizes, and to try to write programs with output that conforms to the data being displayed.

arkoenig 340 Practically a Master Poster

If you're serious about writing C++ programs, rather than writing C programs and using a C++ compiler to compile them, std::fill is a better choice than memset because it's type-safe and works even for class types.
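
A small sketch of the two side by side:

#include <algorithm>   // std::fill
#include <cstring>     // std::memset
#include <string>

int main()
{
    int a[100];
    std::fill(a, a + 100, -1);        // type-safe: works for any value, not just byte patterns

    std::string s[10];
    std::fill(s, s + 10, std::string("empty"));   // fine for class types

    char buf[256];
    std::memset(buf, 0, sizeof buf);  // memset is only appropriate for raw bytes
    // std::memset(s, 0, sizeof s);   // would wreck the std::string objects
}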

mrnutty commented: Thank you, finally someone pointed out the obvious! +5
arkoenig 340 Practically a Master Poster

The problem is solvable in principle, but it isn't easy. Here's why.

Suppose you're trying to obtain a random number between 0.0 and 1.0, and you want the result to be uniformly distributed. That does not mean you want every possible floating-point number to appear equally likely, because the little tiny numbers close to 0, with large negative exponents, are much more closely spaced together than the bigger numbers near 1.

So if you want uniform distribution, and you also want every floating-point number to be possible as a result, you wind up with a whole bunch of tiny numbers that, because they're so close together, occur extremely rarely--but still have to be able to occur at all.

Figuring out how to do this reliably is far from easy, especially if your integer random-number generator isn't very good (as many of them aren't).

So before diving into this problem, you might want to think a bit about whether it's really what you want.

arkoenig 340 Practically a Master Poster

What you should really do is use std::string instead of char arrays.

arkoenig 340 Practically a Master Poster

Please post a short, complete example that illustrates the problem.

arkoenig 340 Practically a Master Poster

You don't like how C defines integer division, and from that you conclude that it has no advantages over assembly language? Surely you're joking!

arkoenig 340 Practically a Master Poster

ok...
is there any specific reason that file descriptors are not a part of c++??

and..if i want to do something like mmap() in c++, what should i do?

File descriptors are not part of C++ because if they were, how would C++ be implemented on an operating system that does not have them? Ditto mmap().

So if you want to use such facilities, you have to consult the documentation for your particular implementation and find out how, and whether, the implementation has made them available.

arkoenig 340 Practically a Master Poster

But mmap() is not part of C++. For that matter, neither are file descriptors. So what you're really saying is that you want to use information that your particular implementation provides in an implementation-dependent way.

Which means that the answer to your question relies on the details of your implementation.

arkoenig 340 Practically a Master Poster

What would you do with the information if you had it?

arkoenig 340 Practically a Master Poster

The answer is that the program does not work, because the line that assigns a value to c has undefined behavior. The behavior is undefined because it attempts both to fetch and store the value of a single object between sequence points.

Because the program does not work, there is no need to spend time understanding why it works.

arkoenig 340 Practically a Master Poster

I don't understand the assignment. Deleting an element from a doubly linked list is not inherently an iterative operation, which means that there is no obvious way to transform it into a recursive operation.

arkoenig 340 Practically a Master Poster

Yes, how about it? Would you care to try to solve it?

I asked the question because I think that trying to solve it may be educationally useful for people who are studying computer science.

arkoenig 340 Practically a Master Poster

I cannot resist pointing out that line 12 is unnecessary in this example, because the only way control can reach line 12 is if avail is found to be equal to new_avail in line 9.

I will also point out that lines 5 and 8 can be collapsed into

iterator new_avail = std::copy(end, avail, begin);

because std::copy returns an iterator that refers to the position after the last element copied.

Finally, the test

if(begin)

in line 3 doesn't seem to do anything useful, because there is no general rule for what it means to use an iterator as a condition.

As before, I have not tested the code that corresponds to these comments, but hope you find them useful anyway.

arkoenig 340 Practically a Master Poster

You are given a one-dimensional array of signed integers. Find the (contiguous) subarray with the largest sum.

If none of the array elements are negative, the solution is obviously that the subarray is equal to the entire array, so the problem is interesting only if the integers are signed.

There is an obvious solution with run time of order n^3, where n is the number of elements in the array: Generate every subarray, sum each subarray's elements, and keep track of the one with the biggest sum you've seen so far. So the interesting part of the problem is to see if you can do better than order n^3, and how much better you can do.
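
A sketch of that obvious solution, just to fix ideas; the interesting part of the exercise is improving on it:

#include <cstddef>
#include <vector>

// Largest sum over all nonempty contiguous subarrays, by brute force: O(n^3).
// (The sketch assumes the array is nonempty.)
long long max_subarray_sum_cubic(const std::vector<int>& a)
{
    long long best = a[0];
    for (std::size_t i = 0; i != a.size(); ++i)          // start of subarray
        for (std::size_t j = i; j != a.size(); ++j) {    // end of subarray (inclusive)
            long long sum = 0;
            for (std::size_t k = i; k <= j; ++k)         // sum the elements
                sum += a[k];
            if (sum > best)
                best = sum;
        }
    return best;
}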

arkoenig 340 Practically a Master Poster

How much memory are you allowed to use? Are you allowed to reorder the smaller list? Without answers to these questions, I don't see how to answer the original question.

arkoenig 340 Practically a Master Poster

If every element points to a subsequent element, where does the last element point? By definition, no element can be subsequent to the last one.

So the problem as stated makes no sense. Once we have figured out exactly what the problem is, it may be possible to come up with a solution.