I do not understand this question??

Use Big-Theta notation to classify the traditional grade-school algorithms for addition and multiplication. That is, if asked to add two numbers each having n digits, how many individual additions must be performed? If asked to multiply two n-digit numbers, how many individual multiplications are required?

Could someone please help with this?


First, do you understand theta notation? Do you understand it well enough to tell me, using theta notation, how long this algorithm takes to run?

for i = 1 to n
  for j = 1 to i
    sum = sum + 1
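(A quick sketch of that loop, my own addition to the thread, in Python: the body runs 1 + 2 + ... + n = n(n+1)/2 times, which is Theta(n^2).)

```python
def count_iterations(n):
    # Mirror the pseudocode: for i = 1..n, for j = 1..i, sum = sum + 1.
    total = 0
    for i in range(1, n + 1):
        for j in range(1, i + 1):
            total += 1
    return total

print(count_iterations(10))   # 55 == 10*11/2, growing as Theta(n^2)
```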

I'll assume yes.

So, when adding two seven-digit numbers together, e.g.

  abcdefg
+ hijklmn

How many times do you add two one-digit numbers? (It depends on how many times you have to carry, but there's an upper limit and a lower limit.)

So, my question for you is, and this is possibly a hard question, but one you need to be good at answering in order to be good at thinking about hard problems: What, precisely, do you not understand?

Thanks for the reply, Rashakil Fol. The problem is that I do not know how to apply big-O to this problem. Perhaps you could give me an example; I would appreciate it.


Well, I know that. Why can't you solve this problem? Where in the solution process do you go, "I don't know how to do this"? You can't just draw a blank. That's not how problems get solved. Do you understand what the problem's asking you to do? Yes? No?

If yes, then do you know what big theta notation means? Yes? No?

If yes, then do you know how to calculate the minimum and maximum possible number of addition operations when adding two numbers? Yes? No?

If yes, then can you express those bounds using theta notation? Yes? No? (Here, "No" might mean that it's just impossible.)

Which part can't you do? That's the part you need to identify.

I think, Rashakil Fol, it is the part about expressing those bounds using theta notation. That is the area where I need help.

Thanks, and sorry, I am new at this.

Well, what are the bounds? If the numbers have length n, then the number of addition operations you have to make is between n and 2n-1, right?

So, to find theta notation for this growth, we try to find some function f(n) such that a*f(n) < number-of-addition-operations < b*f(n) for all sufficiently large n, where a and b are positive constants.

That function would be f(n) = n, with a = 0.5, and b = 2. (Or you could let a = 0.9, b = 999, if you wanted.) Anyway, looking at large enough values of n, we know that the number of addition operations is greater than 0.5*n, and less than 2*n. This means that Theta(n) would describe the number of addition operations, where n is the length of the two numbers.
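To make those bounds concrete, here is a small sketch (my own, not from the thread) that simulates grade-school addition and counts one-digit additions; with no carries you get exactly n, and with a carry in every column you get 2n - 1.

```python
def addition_ops(a_digits, b_digits):
    """Count one-digit additions when adding two equal-length numbers,
    given as digit lists with the least significant digit first."""
    ops = 0
    carry = 0
    for da, db in zip(a_digits, b_digits):
        ops += 1          # adding the two column digits
        s = da + db
        if carry:
            ops += 1      # adding the carry counts as another one-digit addition
            s += carry
        carry = s // 10
    return ops

print(addition_ops([1, 2, 3], [1, 2, 3]))   # 3: no carries, so exactly n
print(addition_ops([9, 9, 9], [9, 9, 9]))   # 5: a carry every column, 2n - 1
```

Since n <= ops <= 2n - 1 for every input, taking f(n) = n with a = 0.5 and b = 2 works, which is exactly the Theta(n) claim above.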

For multiplication, you undergo a similar procedure.

Sorry for the late reply. Thanks for the help, Rashakil Fol. I've started reading about this more, and your help is really appreciated.

Thanks again.


The best example of Big-O I can think of is doing arithmetic. Take two numbers (123456 and 789012). The basic arithmetic operations we learnt in school were:
addition;
subtraction;
multiplication; and
division.

Each of these is an operation or a problem. A method of solving these is called an algorithm.

Addition is the simplest. You line the numbers up (to the right) and add the digits in each column, writing the last digit of that column's sum in the result. The 'tens' part of that sum is carried over to the next column.

Let's assume that adding two digits is the most expensive operation in this algorithm. It stands to reason that to add these two numbers together we have to add 6 columns of digits (and possibly carry into a 7th). If we add two 100-digit numbers together we have to do 100 additions. If we add two 10,000-digit numbers we have to do 10,000 additions.

See the pattern? The complexity (being the number of operations) is directly proportional to the number of digits. We call this O(n) or linear complexity.
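The column-by-column procedure above can be sketched as follows (a minimal illustration of mine, with digits stored least significant first); each digit costs one column of work, hence the linear complexity.

```python
def grade_school_add(a_digits, b_digits):
    """Grade-school addition of two equal-length numbers given as digit
    lists (least significant digit first): one column per digit, O(n)."""
    result = []
    carry = 0
    for da, db in zip(a_digits, b_digits):
        s = da + db + carry
        result.append(s % 10)   # write the last digit of the column sum
        carry = s // 10         # carry the 'tens' part to the next column
    if carry:
        result.append(carry)
    return result

# 123456 + 789012 = 912468, digits least significant first:
print(grade_school_add([6, 5, 4, 3, 2, 1], [2, 1, 0, 9, 8, 7]))
```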

Subtraction is similar (except you may need to borrow instead of carry).

Multiplication is different. You line the numbers up, take the first digit of the bottom number and multiply it against each digit of the top number in turn, then repeat for each remaining bottom digit. So to multiply our two 6-digit numbers we must do 36 multiplications. We may need to do as many as 10 or 11 column adds to get the end result too.

If we have two 100-digit numbers we need to do 10,000 multiplications and 200 adds. For two one-million-digit numbers we need to do one trillion (10^12) multiplications and two million adds.

As the algorithm scales with n squared, this is O(n^2) or quadratic complexity.
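Here is a sketch (again my own, not from the thread) of grade-school multiplication that counts the one-digit multiplications: every bottom digit meets every top digit, so two n-digit numbers cost exactly n*n multiplications.

```python
def grade_school_multiply(a_digits, b_digits):
    """Grade-school multiplication of two numbers given as digit lists
    (least significant digit first). Returns (product digits, count of
    one-digit multiplications)."""
    mults = 0
    result = [0] * (len(a_digits) + len(b_digits))
    for i, db in enumerate(b_digits):
        for j, da in enumerate(a_digits):
            mults += 1
            result[i + j] += da * db   # accumulate the partial product
    carry = 0
    for k in range(len(result)):       # normalize: propagate carries
        result[k] += carry
        carry, result[k] = divmod(result[k], 10)
    return result, mults

# 123456 * 789012, digits least significant first:
digits, mults = grade_school_multiply([6, 5, 4, 3, 2, 1], [2, 1, 0, 9, 8, 7])
print(mults)   # 36 one-digit multiplications for two 6-digit numbers
```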