
Hello,

I am not sure if this is the correct place to post this...

I would like your general opinions on this:

When testing software with different functionalities, is it good practice to state up front the number of test cases to be prepared for each function? For example, we would prepare 100 test cases for the whole system, and 10 test cases for each functionality (assuming there are 10 functionalities).

That's not the way I do it when I test my system, as I create the test cases based on the use case specifications. I would like to know your thoughts on this.


I typically separate testing into two phases:

  1. Unit testing for basic functionality and internal correctness that isn't easily checked by the end user, or that needs more thorough coverage at a lower level.
  2. User acceptance testing (UAT) based on user stories from the design notes. These are cases where there's a clear input and output for the end user.
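
As a minimal sketch of that split (the discount function and its rules here are hypothetical, purely for illustration):

```python
def apply_discount(price, percent):
    """Hypothetical function under test."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Unit tests: internal edge cases the end user never exercises directly.
assert apply_discount(100.0, 0) == 100.0
assert apply_discount(100.0, 100) == 0.0
try:
    apply_discount(100.0, 150)
    assert False, "expected ValueError for an out-of-range percent"
except ValueError:
    pass

# UAT-style case, phrased from a user story:
# "As a customer, a 20% coupon on a $50 item should charge me $40."
assert apply_discount(50.0, 20) == 40.0
```

The unit tests probe boundaries and error handling; the UAT case checks the one input/output pair the user story actually promises.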

deceptikon, so do you reckon it's a good thing to set a number of test cases to be prepared in advance?


I think setting an arbitrary number of tests is silly. If you can get away with one test case, that's fine. If you require hundreds to properly ensure correctness then so be it. The number of test cases should be determined based on what you're testing, why you're testing it, and how detailed you want the results to be.


It depends on the code that you are testing. In general, it's a matter of how deterministic the code is, how sensitive it is to its input, whether it has pathological cases or error conditions, and so on. An arbitrary number does not make much sense, but sometimes it boils down to a simple "more tests = more confidence that it works", and then you run as many tests as you need to be confident enough to release the code.

For example, if you have a simple algorithm like a matrix-vector multiplication, then testing it once is probably enough because the algorithm has no error cases (unless you want to handle overflows / underflows and other "rare" errors), has no pathological cases, is completely deterministic, and is not sensitive to the input (the algorithm is the same regardless of the actual input numbers).
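
A quick sketch of that "one test is probably enough" situation, using a naive Python matrix-vector product (the implementation is just illustrative):

```python
def mat_vec(A, x):
    """Naive matrix-vector product: no branches, no error cases,
    fully deterministic, same code path for every input."""
    return [sum(a * b for a, b in zip(row, x)) for row in A]

# One deterministic check against a hand-computed result suffices here.
A = [[1, 2], [3, 4]]
x = [5, 6]
assert mat_vec(A, x) == [17, 39]  # 1*5+2*6 = 17, 3*5+4*6 = 39
```

Since every input exercises the same code path, one hand-verified case already covers the algorithm's entire behavior.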

On the other hand, if you are testing a matrix inversion algorithm, then you are going to need much more testing because it has error cases (singular matrices), the results are very sensitive to the input values (e.g., accumulation of round-off error), and the algorithm's execution depends on the input (i.e., it looks at the data to make decisions). This means that you need to test it with a wider set of possible inputs, including inputs that should fail (gracefully), and including any input that could be pathological (i.e., manufactured to throw off the algorithm).
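
For illustration, here is a hypothetical 2x2 inverter with a test for the normal case and one for the singular (error) case; a real suite would also probe ill-conditioned, nearly singular inputs:

```python
def invert_2x2(M):
    """Hypothetical 2x2 matrix inverter, to illustrate error-case testing."""
    (a, b), (c, d) = M
    det = a * d - b * c
    if det == 0:
        raise ZeroDivisionError("singular matrix")
    return [[d / det, -b / det], [-c / det, a / det]]

# Normal input: det = 4*6 - 7*2 = 10.
assert invert_2x2([[4, 7], [2, 6]]) == [[0.6, -0.7], [-0.2, 0.4]]

# Error input: a singular matrix must fail gracefully, not return garbage.
try:
    invert_2x2([[1, 2], [2, 4]])
    assert False, "expected failure on singular matrix"
except ZeroDivisionError:
    pass
```

The point is that each distinct behavior of the algorithm (success, failure, sensitivity to the data) needs its own test cases, which is why the count grows with the algorithm's complexity.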

And then there are algorithms that are non-deterministic; they might use random-number generators to take stochastic steps. There are also algorithms whose error cases are very hard to foresee or manufacture. In these cases, you typically have to run full-blown randomized tests: write some code that verifies the correctness of a result, generate a very large number of random inputs (or cases), feed them to the algorithm, and verify each result. Typically, the number of trials is between 1 million and 1 billion, to gain good confidence that the algorithm works, or at least to estimate its expected failure rate.
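
A rough sketch of such a randomized harness in Python (the algorithm under test is a stand-in, and the trial count is kept small here; the 10^6 to 10^9 trials mentioned above would replace it in practice):

```python
import random
from collections import Counter

def sort_under_test(xs):
    """Stand-in for the algorithm being tested; sorted() plays its role here."""
    return sorted(xs)

def is_correct(original, result):
    # Independent verifier: the result must be ordered and a
    # permutation of the input (checked via multiset equality).
    ordered = all(a <= b for a, b in zip(result, result[1:]))
    return ordered and Counter(result) == Counter(original)

random.seed(1234)   # fixed seed so any failure is reproducible
trials = 10_000     # scale this up for real confidence
failures = 0
for _ in range(trials):
    xs = [random.randint(-1000, 1000) for _ in range(random.randint(0, 50))]
    if not is_correct(xs, sort_under_test(xs)):
        failures += 1

assert failures == 0
```

The key design point is that the verifier is written independently of the algorithm, so a bug in the algorithm cannot hide inside the check, and the failure count directly estimates the failure rate.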

Again, this all depends on the nature of the code being tested. But at the very least, you need one test case for each possible branch in the code (to exercise them all), including the "error branches".
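
As a small illustration of one test per branch (the function and its branches are made up), including the error branch:

```python
def classify(n):
    """Hypothetical function with three normal branches plus an error branch."""
    if not isinstance(n, int):
        raise TypeError("n must be an int")   # error branch
    if n < 0:
        return "negative"
    if n == 0:
        return "zero"
    return "positive"

# One case per branch:
assert classify(-5) == "negative"
assert classify(0) == "zero"
assert classify(7) == "positive"
try:
    classify("x")
    assert False, "expected TypeError on the error branch"
except TypeError:
    pass
```

Four branches, four cases: that's the floor, before any of the input-sensitivity concerns above add more.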
