AuburnMathTutor 37 Junior Poster

I thought C/C++ would offer a few benefits beyond pure assembler. Apparently I was wrong.

Languages that don't have this behavior make a great many common programming tasks harder, not easier.

AuburnMathTutor 37 Junior Poster

One particularly neat solution is to put the code in a function...

def getYN(prompt='> ', errormsg='This option is not available.'):
    result = raw_input(prompt)
    if result == 'y' or result == 'n':
        return result
    else:
        print errormsg
        return getYN(prompt, errormsg)

This has the added benefit of crashing the program if the user refuses to enter 'y' or 'n' about 1000 times.
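An iterative version sidesteps that recursion-limit crash entirely. A sketch, using the Python 3 spelling `input` (the original uses Python 2's `raw_input`) and a lowercase name of my own choosing:

```python
def get_yn(prompt='> ', errormsg='This option is not available.'):
    # Loop instead of recursing, so a stubborn user can't blow the stack.
    while True:
        result = input(prompt)
        if result in ('y', 'n'):
            return result
        print(errormsg)
```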

AuburnMathTutor 37 Junior Poster

^ Well, one could be creative. Given two linked lists, you could append them in O(n), transform to an array in O(n), sort the array in O(n*logn), and convert back to a linked list in O(n). Really there are a thousand different ways to answer this question in the affirmative, and if you're not concerned with actual efficiency but with complexity... each is as good as the last.
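A minimal Python sketch of that route (the `Node` class and the function names are my own, purely for illustration):

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def concat(a, b):
    # Append list b to list a in O(n) by walking to a's tail.
    if a is None:
        return b
    tail = a
    while tail.next:
        tail = tail.next
    tail.next = b
    return a

def sort_linked(head):
    # Linked list -> array in O(n), sort in O(n log n), back in O(n).
    values = []
    node = head
    while node:
        values.append(node.value)
        node = node.next
    values.sort()
    head = None
    for v in reversed(values):
        head = Node(v, head)
    return head
```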

AuburnMathTutor 37 Junior Poster

Sure.

You know the definition of an NFA, correct? You will need a structure that corresponds as closely as possible to this. In particular, you're going to need the concepts of:
- NFA
- State
- Transition
- Symbol
Java's OOP framework will make it easy to define each of these things in a vacuum. An NFA has a bunch of states, one of which is designated as the start state and some subset of which are accepting states. Transitions go between two (possibly identical) states and are activated either on one symbol or on no symbol at all (a lambda/epsilon transition).

Now to actually process stuff, you'll want a backtracking (I'd suggest recursive) algorithm that tries each of the relevant transitions at every symbol read. Of course you'll want to save the input because you might need it later... probably best to get the entire string you're testing first. Remember, an NFA accepts if there is any path that accepts, so you'll only need to keep exploring the possibilities until you find a hit.
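A minimal sketch of that recursive backtracking (in Python rather than Java, to keep it short; encoding delta as a dict keyed on (state, symbol) pairs, with None standing for lambda, is just one possible representation):

```python
def nfa_accepts(delta, start, accepting, s):
    # delta maps (state, symbol) -> set of next states;
    # the symbol None stands for a lambda/epsilon transition.
    def explore(state, i, eps_seen):
        # Accept if the input is exhausted in an accepting state.
        if i == len(s) and state in accepting:
            return True
        # Follow lambda transitions (eps_seen guards against cycles).
        for nxt in delta.get((state, None), ()):
            if nxt not in eps_seen and explore(nxt, i, eps_seen | {nxt}):
                return True
        # Otherwise try consuming the next input symbol.
        if i < len(s):
            for nxt in delta.get((state, s[i]), ()):
                if explore(nxt, i + 1, frozenset({nxt})):
                    return True
        return False
    return explore(start, 0, frozenset({start}))
```

Because it returns as soon as any branch accepts, this implements exactly the "keep exploring until you find a hit" idea.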

An alternative is to construct the equivalent DFA from your NFA before testing, but that's more work. The benefit I guess is that if you will be checking lots of strings, this approach is almost guaranteed to be faster.

AuburnMathTutor 37 Junior Poster

^ Agreed. Perhaps a more direct hint:

You could just take the second list and stick it on the end of the first one. Can you sort a linked list in less than O(n^2)?

Note: Answering the question this way isn't as efficient, but it has as good a time complexity as you could hope for.

AuburnMathTutor 37 Junior Poster

^ Good point. An alternative: can you find where to insert elements of the small list inside the big list in less than O(n)? (HINT: The big list is sorted.)
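On an array, that hint points at binary search, which Python's standard library already provides (a sketch; the example values are arbitrary):

```python
import bisect

big = [1, 3, 5, 7, 9, 11]          # the sorted big list
# bisect_left finds the insertion point in O(log n) comparisons
print(bisect.bisect_left(big, 4))  # -> 2: 4 belongs between 3 and 5
```

Note that actually inserting into a Python list still costs O(n) element shifting; the O(log n) applies only to finding the position.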

AuburnMathTutor 37 Junior Poster

^ Rephrasing part of what he said, if you can make algorithm X run faster in VB by making some change to X producing X', odds are - if you changed something fundamental - X' will run faster than X in other languages, too.

AuburnMathTutor 37 Junior Poster

Well, in my methodology sections I usually explain the approach (i.e. how I decided to do it before I did it, and why) as well as the experimentation (i.e. how I actually did it once I'd decided). Ideally, readers should be able to see why what you did answers the questions you set out to answer, and to replicate exactly what you did in case they'd like to see it for themselves.

"3.2 Technical data
3.2.1 System specification
3.2.2 system development
3.2.2.1 System design "
Unless the technical data has to do with parts or procedures you used in actually doing what you did, I'd put it in a separate section, possibly an appendix.

Just use your judgment.

AuburnMathTutor 37 Junior Poster

^ That's actually a very well presented description.

AuburnMathTutor 37 Junior Poster

"A civil and thought provoking debate"
- Yes, you definitely seemed to be the type for that, judging by your previous post, with the requisite irrelevant link to Wikipedia and lack of constructive contribution to my earlier post. And it's probably for the best we don't continue, because I actually don't speak any English, except what I have typed in this discussion and this sentence explaining it. Je suis le roi de France.

AuburnMathTutor 37 Junior Poster

^ More like a slippery slope, if that, but still.

And that being said, I don't think the comparison is so fallacious; he says that there is no reason a programming language should resemble natural language, and machine language is - IMHO at least - about as far from natural language as you can get. I'd argue that it's not such a gross misrepresentation of his opinion at all. He says "There is no reason to make programming languages look like natural languages." I pointed out that we started with a language that was much farther from natural languages, i.e. machine language, and have since developed languages that are more similar in structure to English. Of course, I didn't make explicit the reason why we moved away from machine language - that should be obvious to anybody in a CS forum - but that reasoning completes a *valid* logical argument... perhaps not a correct argument, but if you want to go after the hypotheses then feel free.

A straw man would be to say that, if what he was saying were true, then nobody should use natural languages. That's a misrepresentation of what he said because he's talking about programming languages, not all languages.

But by all means, continue to post and I will continue to educate you, Radical Edward. And thanks for your valuable contribution to the discussion.

AuburnMathTutor 37 Junior Poster

Pseudocode is just free-form language that you use in a precise way to mimic some conventional programming language. Generally the only requirements are that it should be clear what you intend to do, that all your symbols are defined, and that you don't do anything "magical" (like having a line that says "find the inverse of matrix A" when the algorithm is supposed to be finding inverses of matrices). Basically, don't presuppose the conclusion, show all the steps, and explain how it can be done algorithmically. For example:

add integers a and b and put the result in c.
if c is at least equal to twenty then return true;
otherwise return false.

c <- a + b
if c >= 20 return true else return false

These are both valid examples of pseudocode. The point of pseudocode is that you can write programs without worrying about a specific language's funny syntax. You choose the syntax, make sure the semantics are relatively clear in the given context, and worry about getting the solution right; ideally, you should be able to produce real code from pseudocode without having too many "how on earth do I do step 7?" moments.
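For instance, either pseudocode version above translates almost mechanically into real code (Python here; the function name is my own):

```python
def at_least_twenty(a, b):
    # add integers a and b and put the result in c
    c = a + b
    # if c is at least equal to twenty then return true; otherwise return false
    return c >= 20
```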

AuburnMathTutor 37 Junior Poster

^ Then why do we use words like "for", "while", "if", "then", etc.? Might as well use "f", "w", "i", "t", etc. And why use infix notation? Why not just do everything in postfix or prefix; that makes compiling easier, and no need for parentheses. In fact, better yet, why not just write everything in freaking machine language. No ambiguity there.
So in summary I completely disagree that there is no reason to make programming languages look like natural languages; there is, and the fact that we have assembly languages with mnemonics and high-level languages with English- and math-like constructs is a testament to this fact.
And FWIW I'm using "ambiguous" in the technical sense of the word, not the usual sense.
And if you don't think that programming languages mimic English a little bit, consider the following programs and honestly tell me that a guy off the street would understand them equally well:

Program #1
if a < b then c = a + b else c = a - b

Program #2
mov eax, a
cmp eax, b
jge noadd
add eax, b
mov c, eax
jmp done
noadd:
sub eax, b
mov c, eax
done:
(for instance)

Program #3
01101010011010100110101001101010011010100110101001101010011010100110101001101010011010100110101001101010011010100110101001101010
(for instance)

Sorry, I don't see any merit in your POV at all.

AuburnMathTutor 37 Junior Poster

There is a balance between making programming languages look like natural languages and having them be as unambiguous as possible. A lot of Lewis Carroll's work should make it obvious that slight differences in English syntax can lead to quite different semantics. In fact English is not as forgiving as most people would have you believe; can you tell me what's wrong with the following sentence?

Mary didn't yet know if he will confront Bob and I about his prior indiscretions.

Hint: There's a lot wrong with that sentence.

AuburnMathTutor 37 Junior Poster

c3: will do (j+1) assignments, i times. Therefore you only need to solve the double summation: the sum over i from 1 to n of (the sum over j from 1 to i of (j+1)i).

c4: will do j additions by two and j assignments. Therefore you only need to solve the double summation: the sum over i from 1 to n of (the sum over j from 1 to i of 2j).

To solve a double summation, solve the inner summation first, treating the outer summation variable(s) as constant, and work your way inside out until all variables are gone and you have a closed form solution with no summations.
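It's also worth sanity-checking a closed form numerically against the brute-force double sum. For the c4 summation, solving the inner sum first gives sum over j from 1 to i of 2j = i(i+1), and summing that over i gives n(n+1)(n+2)/3 (my own worked result, given here only as an illustration of the method):

```python
def brute(n):
    # evaluate the double summation term by term
    return sum(2 * j for i in range(1, n + 1) for j in range(1, i + 1))

def closed(n):
    # closed form: sum_{i=1}^{n} i(i+1) = n(n+1)(n+2)/3
    return n * (n + 1) * (n + 2) // 3
```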

Hope this helps.

AuburnMathTutor 37 Junior Poster

Well, at some point you're going to need to do the following...
- define syntax
- define semantics
- write a formal grammar
- write a compiler/interpreter

Also, if you want your language to actually be used,
- port to a variety of popular systems
- have a genuinely good idea and reason for making a new language
- have something new to offer or do what is already done better
- provide support for writing all kinds of domain-specific apps... a standard library

AuburnMathTutor 37 Junior Poster

This is not a CS question, right?

AuburnMathTutor 37 Junior Poster

There are different degrees of portability. Probably the best you can hope for in terms of portability is a language that either runs on a virtual machine (then your code is as portable as there are virtual machines for different computers) or in an interpreter (ditto, for interpreters) and even then if you try to do hardware or OS specific things, it won't work. Examples:

portable: a Java program that has two integer variables declared statically at compile time, with values A and B, where the program computes the average and puts the answer in a third variable declared at compile time and initialized to 0.

not portable: a program written in assembly language that controls a robotic arm and hand and saves the data in a parallel file system across a distributed network of heterogeneous hosts. Oh and it does some vector processing on the side.

AuburnMathTutor 37 Junior Poster

Well, technically, that code is Big-Oh of N^2 since T(N) is certainly bounded by some quadratic.

Of course, if M grows more slowly than linearly in N (i.e. it is constant or polylogarithmic), then the bound is simply the simplest form of O(MN) after replacing M with its expression in N. For example,

M = 1,000,000,000,000,000,000,000,000,000,000 = const.
O(MN) = O(1,000,000,000,000,000,000,000,000,000,000 * N) = O(N)

M = log(N)
O(MN) = O(N log N)

If you have no information about how M depends on N, the best you can say is O(MN).

AuburnMathTutor 37 Junior Poster

Looks good to me.

AuburnMathTutor 37 Junior Poster

oops.............

AuburnMathTutor 37 Junior Poster

Depends on your interests, but here are some ideas.

Make up an interesting theoretical problem and try to solve it, giving any results you find. This is easier than you'd think. Here's a link to a problem I came up with, and you're free to use it provided (a) you give me a nod and (b) you let me know what you figure out.

http://www.python-forum.org/pythonforum/viewtopic.php?f=14&t=19215

There would be mathematical analysis, literature review, programming, and writing up results. It would make for a fine thesis, unless the problem is really easy to solve.

Or come up with your own problem, show that it's interesting, and try to find algorithms that solve it. If you can find fast ones, excellent. Otherwise, it may be possible to prove its difficulty, and that would be fantastic.

On the more practical side, anything with parallelism is golden.

AuburnMathTutor 37 Junior Poster

The latter analysis is correct; work from the inside out. The summation for k will not contain k in any of its summands, so it is easy to reduce. Ditto for the summation in j; the closed form of that summation will include only i and whatever you're using for the time of T. The summation for i will be somewhat trickier because i will appear in each of the summands, but there are well known formulae for evaluating such (relatively simple) summations.

AuburnMathTutor 37 Junior Poster

^ Right you are, invisal. The big-Oh time complexity of (1) is in fact O(n). It would be O(n log n) if it were the following:

m := n
while n > 0 do
   L(m);
   n := n / 2
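A quick way to convince yourself of the O(n log n) claim is to count the loop passes: n is halved each time, so L(m) - at cost O(m) = O(n) per call - runs about log2(n) times. A small Python check (assuming the pseudocode's division is integer halving):

```python
def loop_passes(n):
    # mirror the pseudocode: m := n; while n > 0 do L(m); n := n / 2
    m, passes = n, 0
    while n > 0:
        passes += 1   # stand-in for the call L(m)
        n //= 2
    return passes
```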

And the answers for (2) and (3) are right on the money. Good job!