sepp2k 378 Practically a Master Poster

I don't know of any built in pow function in Haskell

Why don't you count ^ as a pow function?

powthing :: Int -> Int

Unless you have a good reason to prefer Int over Integer (like you know for a fact that your use of Integer is causing performance problems in your code and that all your values fit into the Int range), I'd recommend either using Integer or keeping the function polymorphic.

sepp2k 378 Practically a Master Poster

Let's say you have 25c and can use the 10c and 5c coins. Since 25 is greater than 10, you may use a 10c coin. If you do, you have 15c left (for which you again may or may not use another 10c coin). If you don't, you still have 25c left, which you pay with 5c coins.

So the total number of ways to give change for 25c using 10c and 5c coins is the number of ways you can give change for 25c using only 5c coins (n[0][25]) plus the number of ways you can give change for 15c using 10c and 5c coins (n[1][15]).

sepp2k 378 Practically a Master Poster

copyOfX is not a copy of x - it's a reference that refers to the same object that x refers to.

If you want a copy, you need to explicitly create a copy, for example by invoking a copy constructor (assuming one is defined) like this:

X copyOfX = new X(x);
sepp2k 378 Practically a Master Poster

When declaring a variable, preceding the variable name with a * makes it a pointer. Preceding it with a & is illegal (in C - in C++ it makes the variable a reference).

In an expression & takes a variable and produces a pointer to that variable, whereas * takes a pointer and produces the value that that pointer points to.

sepp2k 378 Practically a Master Poster

Yes, exactly.

It should be noted that this is generally seen as very confusing behavior; most modern languages don't support dynamic scope, and those that do tend not to use it as the default.

sepp2k 378 Practically a Master Poster

what is meant by static scope and dynamic scope?

The scope affects whether a given variable can be accessed from a given location in the code and, if there are multiple variables with the same name, which one is meant by a given use of the name.

Using static scope you can use variables that are defined globally or inside a code block that you're nested in. This means that, when using nested and/or anonymous functions (in languages that support them), such a function can access variables of the enclosing function even after that function is no longer running. If multiple variables with the same name could be accessed by that logic, the inner-most one is chosen.

Using dynamic scope you can use variables that are defined globally or which exist in any activation record currently on the stack. This means that you can access local variables of another function if (and only if) that function called you. If multiple variables with the same name could be accessed by that logic, the one that has been most recently pushed onto the stack is chosen.

Suppose I have 2 functions, main and fact. The activation record for main knows the address of fact so that it can call it. How? Are main and fact linked before compiling?

(Assuming the functions are defined in different compilation units - otherwise linking is irrelevant) No, linking happens after compilation. The address of fact is simply not known until linking. Before then, …

sepp2k 378 Practically a Master Poster

The relation is that the record is stored on the stack.

PS: This happens for all function calls (that aren't inlined by the compiler), not just recursive ones.

sepp2k 378 Practically a Master Poster

Are you sure that the warning doesn't say "makes integer from pointer" rather than "makes pointer from integer"? I have no idea why it would say the latter for your code and my compiler at least says the former.

The reason for that warning is that the elements of argv are pointers (char pointers to be specific), not integers and thus assigning one to an integer is likely a bug (and it definitely is in this case).

sepp2k 378 Practically a Master Poster

The difference between a deterministic Turing machine and a non-deterministic one is explained on the wikipedia article about non-deterministic Turing machines.

In short, a non-deterministic Turing machine can, in effect, be in multiple states at once (it may have several possible transitions from a given configuration and pursues all of them), whereas a deterministic one can only be in one state at a time.

sepp2k 378 Practically a Master Poster

For NP hard problems is it proved that there no algorithms that can solve this problem in polynomial time?

For some it is. That is, there are problems that we know can't be solved in polynomial time, and those problems are NP-hard but not NP-complete. There are also NP-hard problems where we don't know whether they're NP-complete. And then there are the NP-complete problems, where we know that they can be solved in polynomial time if and only if NP=P.

For NP complete problems there have not yet been algorithms that can solve this problems in polynomial time but you cannot completely rule out the possibility.

Right.

If the NP-complete problem is solved, all subsets will be solved as well, including NP-hard.

NP-hard isn't a subset of NP-complete - it's the other way around.

sepp2k 378 Practically a Master Poster

P problems are problems that can be solved by deterministic Turing machine in polynomial time.

NP problems are problems that can be solved by a non-deterministic Turing machine in polynomial time.

Since a non-deterministic Turing machine can always be at least as efficient as a deterministic one, it follows that all P problems are also NP problems. So NP is a superset of P. It is not known whether NP is a strict superset of P, but that is generally believed to be the case.

So NP hard problems are those the scientists believe they can't be solved in polynomial time(no algorithm to solve it in polynomial time ).

NP-hard problems are believed not to be solvable in polynomial time (by deterministic Turing machines), but that is not their definition. That is, if computer scientists' beliefs change, that will not change which problems are NP-hard.

NP complete problems are problems that haven't been solved by any algorithm in polynomial time but can be solved in polynomial time(open research field).

It is not known whether NP-complete problems can be solved in polynomial time. If they can this would mean that P=NP. This is generally believed not to be the case.

To summarize: there are some NP-hard problems that are in NP (which we call NP-complete) and some which are not. Those that are may or may not be solvable in polynomial time (but probably not). Those that are not can definitely not be solved in …

sepp2k 378 Practically a Master Poster

I know that NP problems can't be solved in polynomial time.

Since NP is a superset of P, that can't possibly be true for all problems in NP. It is likely that the NP-complete problems can't be solved in polynomial time, but that's an open question (NP ?= P), which has not yet been proven either way.

But what do NP complete and NP hard problems mean?

An NP-hard problem is a problem that is "at least as hard" as all other problems in NP. That is, the most efficient solution for an NP-hard problem can't be more efficient than that for any other NP problem. Note that this includes problems that are not themselves in NP.

An NP-complete problem is an NP-hard problem that is in NP.

sepp2k 378 Practically a Master Poster

You don't even have a copy constructor. So of course all the "copies" of your list are going to have a pointer to the same memory. You should read up on pointers and copy constructors.

PS: I would have appreciated it if your minimal but complete code sample (which looks a lot like it's your actual code unmodified to be honest) had been a bit more minimal.

sepp2k 378 Practically a Master Poster

On line 6 you call push_back with an RBTree as its argument, so does that mean that you changed persistenList to a vector<RBTree>? If so, you don't need to mess around with pointers at all, so you should just get rid of all that.

Especially since it's dangerously wrong. On line 2 you allocate memory on the heap and store the address in myRBTreeCopy. On line 3 you take the address stored in myRBTree and store it in myRBTreeCopy, discarding the heap address from line 2. So that memory is now leaked. On line 9 you delete the memory address stored in myRBTreeCopy, which is the same as myRBTree. If that address is not allocated with new, that's undefined behavior right there. And even if it is, that's undefined behavior on the second iteration of the loop since you're deleting the same address multiple times.

If you have further problems with your code, I'd appreciate a minimal but complete code sample, that is one that compiles without error and when run exhibits the problem you're asking about.

If you're using a vector<RBTree> and you see the same problems as in the beginning, that might mean that your copy constructor is messed up, so I can't help you until I see that.

Yes, I know that an RB tree is not a linked list, but I need to stick in this notation for certain reasons.

So your RBTree class is really a linked list class, but you need to …

sepp2k 378 Practically a Master Poster

In your example code you only push one tree into the vector, so that's not enough to reproduce your problem. My guess is that in your real code you have a loop around lines 3 to 10. So at the end of each iteration of the loop, myRBTreeCopy goes out of scope and the pointer you stored in the vector becomes invalid. Then dereferencing that pointer later invokes undefined behavior. Since each instance of the myRBTreeCopy variable presumably reuses the same memory address on your implementation, this undefined behavior manifests itself as all pointers pointing to the same tree.

The easiest way to fix your problem would probably be to store the trees directly in the vector instead of using pointers. If that's not an option, you'll need to use new to allocate the trees on the heap.

PS: An RB tree is not a linked list.

sepp2k 378 Practically a Master Poster

I'm not quite sure I understand the question. You're simulating the game of life, so, at each step, you have a grid containing the live and dead cells, right? So just iterate over that grid and count the cells that are alive.

sepp2k 378 Practically a Master Poster

The third error is telling you that the first two branches of your case return a list (nil or y/z respectively) while the third branch returns a Cons.

And the fourth error tells you that z is a list, but you treat it as a Cons.

The reason that it thinks that z is a list is that you write x=z, meaning x and z need to have the same type, and then y else z, meaning y and z also need to have the same type.

To fix this, you should first decide what each of the arguments' types should actually be and what you want the function to return. Once you decided that, you can go through the code to find the places where each value is used with a type other than the one it should have and fix those.

sepp2k 378 Practically a Master Poster

Cons is just the name of your type - you do not have a constructor named Cons. So writing Cons(something) doesn't work. You probably meant List(x1, x2).

sepp2k 378 Practically a Master Poster

The error is because you call sub2 first with hd z and then with tl z as the third argument. hd z and tl z have different types: If z is a list of foos, then hd z is a foo and tl z is a foo list. For foo and foo list to be the same type, foo has to be equal to foo list, which is impossible. That's the circularity the error message is complaining about.

Another error (not mentioned by the error message) is that the types of your patterns don't match: Your first pattern is nil, implying that z is a list (which it certainly is since you use hd and tl on it later). However your other two patterns are of some other type.

sepp2k 378 Practically a Master Poster

If the user enters a number too large to fit into a long, LONG_MAX will be returned. If the user enters a number that's exactly LONG_MAX, LONG_MAX will also be returned. The first would be an error case, the second would not (in the case that sizeof(int) == sizeof(long) - otherwise LONG_MAX would still be too large to fit into an int and the problem would be caught by the <= INT_MAX check). So we need errno to distinguish between these cases.

deceptikon commented: Good explanation. +13
sepp2k 378 Practically a Master Poster

Right now put your each loop before return false and it'll work. The only thing left to do is to add the check for 0 to your if.

sepp2k 378 Practically a Master Poster

Okay the reason that the second code always returns false is that you misspelled elsif as elif. This way Ruby doesn't recognize that there's a second condition to be checked. Once you fix that, the behavior should be the same as the first code (i.e. still wrong) for the reason that I already explained.

sepp2k 378 Practically a Master Poster

Note though that this will always return true if the array is empty - not just when n is 0. You should check for n being 0 as well to avoid this.

sepp2k 378 Practically a Master Poster

That looks fine assuming the syntax errors (Return and True being capitalized) were just typos when copying the code here.

Where did you put the code? If you put it inside the each, it won't work because, as I said, the each won't run if the array is empty.

If you put it after the return false, it won't work because the method is done once return false is reached.

So put it at the beginning of the method and everything should be fine. If that does not work please post your current version of the code.

sepp2k 378 Practically a Master Poster

The only return true in your code is on line 5. That line is inside the each, so it can only be reached if the array is not empty (otherwise the code in the each does not execute at all). If the array is empty it goes directly to line 10 and thus returns false.

If you want to return true for an empty array if the argument is zero, you need to add a check whether the array is empty and the argument zero and return true in that case.

If you also want to return true if the array contains the element n (and thus no adding is necessary), you could just append the element 0 to the array before the each. That way no special case would be necessary and sum_to_n?([],0), sum_to_n?([42],42) and sum_to_n?([23,13,7],13) would all be true (if that is indeed the intention).

sepp2k 378 Practically a Master Poster

'^' is a special character ONLY when used as the first character within square brackets

^ means "beginning of the line" outside of square brackets, so it does need to be escaped.

You're right about _ though.

sepp2k 378 Practically a Master Poster

The only statement inside your for-loop is prod = prod*even;. The others are outside of the loop. The braces around your for-loop do not affect what's inside the for-loop - they only introduce a new local scope. To affect the extent of the for loop, the opening brace should be after the for(...).

sepp2k 378 Practically a Master Poster

Instead of hardcoding "Ted.txt" like that you can also use a command line argument as James suggested (which will be accessible through the array that's passed as an argument to main). Command line arguments can be passed through Eclipse (in the run configuration there's a text field for them) or through the command line (by writing java SetTest Ted.txt). So you don't have to change your code and recompile any time you want to use a different file.

sepp2k 378 Practically a Master Poster

< is the input redirection operator in the shell. When you write java SetTest < Ted.txt, you're executing the command java SetTest without passing any arguments to SetTest and feeding Ted.txt into the program's standard input stream. This is not possible to do in Eclipse.

If you want to read from a file, you should pass a File object as the argument to Scanner's constructor rather than System.in. That's what James meant by "create your Scanner using the file directly".

sepp2k 378 Practically a Master Poster

{0} and {1} are placeholders for the second and third arguments to WriteLine respectively. So {0} is replaced with the value of n (the second argument) and {1} is replaced with the value of Fib(n) (the third argument). So Console.WriteLine("F({0}) = {1}", n, Fib(n)) will print F(3) = 6 when n is 3.

For more information see Composite Formatting (MSDN).

PS: If that wasn't your question, please clarify.

sepp2k 378 Practically a Master Poster

pass in the arrays as references: int Checker(char *s,char *u)

The signatures int Checker(char s[], char u[]) and int Checker(char *s, char *u) are 100% equivalent. It is not possible to pass C arrays by value.

Also note that s and u are pointers, not references. A reference to an array would look like char (&s)[N], but that construct is not commonly used.

sepp2k 378 Practically a Master Poster
  1. If by "const" you mean macros (as opposed to a variable that has been declared const), it's convention in C and C++ for macro names to be in ALL_CAPS to easily distinguish between macros and functions/variables.

  2. That's a matter of style and really up to you (or your team's style guidelines if you're a part of a larger team).

sepp2k 378 Practically a Master Poster

Okay and what about my other questions? What does line 29 in main.cpp look like? If you replace NewLine with std::endl, does that make a difference?

From the error message it looks like the problem might be because endl is a function template, so the compiler does not know which instantiation (and thus which type) to pick. This would be unrelated to the fact that you defined a NewLine macro.

To fix this you could define NewLine as a function that explicitly takes an ostream as its argument. Or you could just use cout normally and not bother with any of this, which is really the preferable option if you ask me.

sepp2k 378 Practically a Master Poster

Where and how are you using NewLine? If you replace your use of NewLine with std::endl, does the error disappear? If so, that would kind of surprise me to be honest. If not, your problem has nothing to do with the fact that you defined NewLine as a macro.

Also what you posted isn't an error message. It's a note that gives you additional information about the preceding error message. Please post the whole error message and any notes attached to it.

sepp2k 378 Practically a Master Poster

There is one situation where that line could cause a segmentation fault by itself: a stack overflow. If your stack grows beyond the size it's allowed to have and index is the first variable that you assign to whose address is outside of the legal bounds of the stack, then the first assignment to index will cause a segmentation fault.

The most likely scenario where that could happen is if you have a stack-allocated array that is too large for the stack and the index=0 happens before you assign anything to the array. Maybe you want to initialize the array using a for or while loop and you create the index variable just for that purpose.

sepp2k 378 Practically a Master Poster

why won't the program stop until all 3 variables are already greater than 70??

Because your while condition says that the program should loop while any of the variables is less than 70. Let's say your variables are a=77, b=70, c=67. Is the statement "a is less than 70, b is less than 70, or c is less than 70" true? Yes, because c is less than 70 and "false or false or true" is true.

If you want it to stop when any variable is greater than or equal to 70, you need to loop while all of them are less than 70. So you need to replace your ors with ands.

sepp2k 378 Practically a Master Poster

If you read the comments on the accepted answer, you get the whole story: The only native code in .net .exe is a single jump instruction. This native code is ignored on both modern Windows systems and on Mono. It's only relevant for Windows systems that predate .net.

None of this affects the compatibility between .net and mono. As far as that is concerned, all of this is irrelevant information.

sepp2k 378 Practically a Master Poster

Do you think it's true or false? What have you tried in order to prove or disprove it? What's the definition of big-Oh - how would 2^(2^n) need to relate to 2^n for 2^(2^n) to be in O(2^n)?

sepp2k 378 Practically a Master Poster

As I said, there's only one function parameter, so no commas. If you write something like int arr[42], you don't have a comma in there either.

sepp2k 378 Practically a Master Poster

the template have 2 parameters, but you transform 'N' in second parameter

What do you mean by "transform" here? I take N as the second template parameter.

you don't use comma

There's a comma between typename T and size_t N - where else do you expect a comma?

Just to clarify: There are two template parameters (T and N) and one function parameter (arr, which has type T (&)[N], i.e. "reference to an N-element array of Ts"). The template parameters are separated by commas. The function parameter is not, as there's only one.

sepp2k 378 Practically a Master Poster

In a parameter list T arr[] is the same as T* arr, so arr is a pointer. The only way to pass an array to a function is by reference. To declare an array reference you'd write T (&arr)[42] where 42 is the size of the array. In this case you'd probably want it to work with any size of array, so you'll need to make the size a template parameter like this:

template <typename T, size_t N>
void print_array(T (&arr)[N]) {
    ...
}

Now you don't even need to use the sizeof trick to get the length of the array, you can just use N.

The downside of this approach is that you'll only be able to use the method with variables of actual array types, so you won't be able to use it with arrays that have been allocated using new (as those would only be accessible through pointers - not directly through a variable). Of course that is true when using the sizeof trick as well.

If you want your function to be able to work with newed arrays too (or generally with arrays that you only have a pointer to), your only option is to take a pointer as your first argument and the size as the second.

sepp2k 378 Practically a Master Poster

To answer the question in the title: There's no way to find out the number of elements in an array if you don't know its type. But it's also not possible not to know the type of an array unless you only have a (void or otherwise cast) pointer to it - and if you only have a pointer, you can't get the size anyway (regardless of whether or not you know the type).

It's not possible to get the size of an object from its type's typeinfo object.

heres the array:
int b[5]={20, 30, 40, 2};
length=sizeof(arr) / sizeof(arr[0]);

What's arr? You only showed us the definition of b.

why i don't get 5, but 1?

Presumably because arr is just a pointer, not an array, but we'd have to see the definition of arr to know for sure.

@UFOOOOOOOOOOO: That doesn't help you get the number of elements in an array though, which is what the OP wanted.

sepp2k 378 Practically a Master Poster

"So if you want a function to be inlined across compilation units, you should put the function definition in the header as either static or extern inline."

Please explain this line little bit more.

If you structure your code like this, fun will likely not be inlined:

// fun.c

int fun(int x) {
    return x;
}

// fun.h

int fun(int);

// main.c

#include "fun.h"

int main(int argc, char **argv) {
    fun(42);
    return 0;
}

But if I do it like this:

// fun.h

static int fun(int x) {
    return x;
}

// main.c

#include "fun.h"

int main(int argc, char **argv) {
    fun(42);
    return 0;
}

Or like this:

// fun.c

int fun(int x) {
    return x;
}

// fun.h

inline int fun(int x) {
    return x;
}

// main.c

#include "fun.h"

int main(int argc, char **argv) {
    fun(42);
    return 0;
}

Then it probably will be.

Note that in my previous post I said "extern inline", but it should just be "inline" in C99 - confusingly the meanings of "extern inline" vs. "inline" are switched around in GNU C89 (i.e. C89 plus GNU extensions, which is the default language of gcc) vs. C99. For more information see this SO question.

sepp2k 378 Practically a Master Poster

One thing that should be noted is that the compiler can inline functions even when they're not declared as inline when it thinks doing so will be appropriate. So for the most part the keyword is unnecessary.

Another thing to keep in mind is that, without link-time optimization, the compiler won't inline a function if the function's source code is not available in the current compilation unit (i.e. if the function definition lives in a different .c file than the call). So if you want a function to be inlined across compilation units, you should put the function definition in the header as either static or extern inline.

sepp2k 378 Practically a Master Poster

You should never prefer macros over inline functions. The only case where you'd use macros would be if you're writing for a compiler that does not support inline functions¹ or you're using macros as something other than a poor man's inline functions² - though even then you should think about whether you couldn't implement the desired functionality in pure C before you decide to use macros.

¹ Standard C did not support inline functions until the 99 standard though some compilers supported them as an extension before then.

² For example gtk/gobject uses macros to implement its object system. Qt (though that's C++, not C) uses macros (in combination with its moc preprocessor) to implement its signals and slots. Its foreach loop is also implemented as a macro.

nitin1 commented: from where you guys come to know about all these things, you are filled-up with loads of knowledge. :D +3
sepp2k 378 Practically a Master Poster

An algorithm that accesses memory in a pattern that's cache friendly will be faster than one that doesn't. People optimize their algorithms for that. That's a real thing.

Saying it doesn't matter because that's machine-specific is just silly. If my program runs much slower than another program because I didn't bother with cache friendliness, the customer won't care if I say that that's a hardware issue, not an issue with my program's algorithm (especially since that's not true by most people's definitions of those terms) - they'll just care that the program runs too slowly.

I should also point out that your proposal of simply summing up each instruction's CPU cycles is hardware specific too (different instructions need a different amount of cycles on different architectures), so I don't understand why that wouldn't also qualify as "testing hardware".

sepp2k 378 Practically a Master Poster

You said this twice now, but you haven't really explained it. For me "testing hardware", in this context, would mean that I take multiple pieces of hardware and find out which one gives best results. While "testing code" would mean that I take multiple pieces of code and see which one produces the best results. Using those definitions I'm clearly testing the code in the scenario described above.

Clearly you're using different definitions and that's fine, but it doesn't really matter what you call it. If one piece of code consistently performs better than another piece of code, then I'm obviously going to prefer that (ignoring for the moment factors like code readability or maintainability as that's not the discussion we're having right now). It does not matter whether I found this out by "testing hardware" or by "testing software".

sepp2k 378 Practically a Master Poster

If I run both pieces of code on the same machine and one is consistently faster than the other, how is that measuring the machine speed? I'm measuring the speed of the pieces of code on this particular machine.

It should also be noted that reading two integers that are next to each other will be faster than reading two integers that are sufficiently far apart on every CPU that has a cache - the only difference is how far "sufficiently far" is and how big the difference is going to be. So this isn't just a quirk of a particular machine.

sepp2k 378 Practically a Master Poster

None of that is relevant when comparing the speed of various algorithms on the same computer.

Yes, it is. As an example consider two pieces of code: Both read two ints from memory and add them. The only difference is that in one piece the two ints are next to each other in memory and in the other they're far away. Clearly both pieces of code will use the same machine instructions - only with different arguments. So if you only add up the cycles of the instructions, they should be equal in speed. But they're not because one piece of code only causes one cache miss while the other causes two.

He's not trying to find out whose computer is faster, but which algorithm is faster.

I know that.

sepp2k 378 Practically a Master Poster

just sum up the clock cycles for each machine instruction.

As I said, that won't take into account caching, branch prediction or pipelining. I'm not saying that those factors are relevant in this specific case, but in general they can be.