Boundary values in black box testing

In the paper “Testing software components using boundary value analysis”, Muthu Ramachandran describes his experience of automating tests to study boundary value analysis on interfaces. He covers the fundamentals of black box testing approaches, but at a somewhat higher level than I was hoping for.

He shows that the input and output values should be derived from the given specification. Using a chart, he then derives the values and their potential boundaries before choosing an arbitrary value within each to use for testing. He uses this to build a table with the columns Operation, Arguments, Type, Range and Value.
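To make that tabulation concrete, a minimal sketch in Python might look like the following; the set_temperature operation, its celsius argument and the 0 to 100 range are invented examples of mine, not values taken from the paper.

```python
# Sketch of the Operation / Arguments / Type / Range / Value table,
# using a hypothetical set_temperature operation with invented ranges.
from dataclasses import dataclass


@dataclass
class ParameterSpec:
    operation: str   # name of the operation under test
    argument: str    # argument the boundary applies to
    type_: str       # declared type from the specification
    low: int         # lower bound of the valid range
    high: int        # upper bound of the valid range

    def boundary_values(self) -> list[int]:
        """Classic boundary picks: just below, on, and just above each bound."""
        return [self.low - 1, self.low, self.low + 1,
                self.high - 1, self.high, self.high + 1]


spec = ParameterSpec("set_temperature", "celsius", "int", low=0, high=100)
print(spec.boundary_values())   # [-1, 0, 1, 99, 100, 101]
```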

Having broken down his boundaries and potential values, he wants to use them for:

1. Range test to ensure that the boundary works with a minimum and maximum value,

2. Specific initialisation values, and

3. State based testing

The first of these is what I would consider real boundary value testing: it ensures that the given boundary works, that the state changes when the threshold is crossed, and that the edge cases behave correctly.
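A range test along those lines might look something like the sketch below. The Thermostat component, its 0 to 100 range and its alarm threshold are hypothetical examples of my own, not from the paper.

```python
# Hypothetical component: a thermostat that only accepts 0..100 celsius
# and flips into an alarm state once a threshold is crossed.
import unittest


class Thermostat:
    MIN, MAX, ALARM_AT = 0, 100, 80

    def __init__(self):
        self.alarm = False

    def set_temperature(self, celsius: int) -> None:
        if not (self.MIN <= celsius <= self.MAX):
            raise ValueError("temperature out of range")
        self.alarm = celsius >= self.ALARM_AT


class RangeTests(unittest.TestCase):
    def test_values_on_the_boundary_are_accepted(self):
        t = Thermostat()
        for value in (Thermostat.MIN, Thermostat.MAX):
            t.set_temperature(value)   # must not raise

    def test_values_just_outside_the_boundary_are_rejected(self):
        t = Thermostat()
        for value in (Thermostat.MIN - 1, Thermostat.MAX + 1):
            with self.assertRaises(ValueError):
                t.set_temperature(value)

    def test_state_changes_when_the_threshold_is_crossed(self):
        t = Thermostat()
        t.set_temperature(Thermostat.ALARM_AT - 1)
        self.assertFalse(t.alarm)
        t.set_temperature(Thermostat.ALARM_AT)
        self.assertTrue(t.alarm)


if __name__ == "__main__":
    unittest.main()
```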

It feels to me as if the paper skips over equivalence partitioning, or conflates it with boundary value analysis and testing. The second item is effectively testing the internal values to show that the expected behaviour holds whether the condition under test is true or false.

What I would like to see is the use of truth tables to show the various possibilities when testing each partition. From these, the testing axioms can be written out and the minimum set of tests created to cover the equivalence classes and partition values. This might be extended to the boundaries to ensure that the tests cover the full range of possibilities.
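Roughly what I have in mind is something like this sketch, where two invented partition predicates are enumerated as a truth table and one witness value is kept per reachable combination; the predicates and candidate values are illustrative assumptions of mine, not derived from the paper.

```python
# Sketch: enumerate a truth table over two invented partition predicates
# ("in range" and "alarm threshold reached") and keep the minimum set of
# representative values needed to cover every reachable combination.
from itertools import product


def in_range(x):        # partition P1
    return 0 <= x <= 100


def alarm_reached(x):   # partition P2
    return x >= 80


candidates = [-1, 0, 50, 80, 100, 101]

# Truth table: combination of predicate outcomes -> one witness value.
truth_table = {}
for value in candidates:
    row = (in_range(value), alarm_reached(value))
    truth_table.setdefault(row, value)   # first witness is enough

for row in product([True, False], repeat=2):
    witness = truth_table.get(row, "unreachable")
    print(f"in_range={row[0]!s:5} alarm={row[1]!s:5} -> test value: {witness}")
```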

State based testing, which is not defined in the paper, helps with determining which valid states the components might be in, in what combinations, and what the outcome should be for each state change. Combining these with flow diagrams to illustrate the possible paths allows the tester to work out how the system should behave.
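As a sketch of that idea, using an invented door component rather than anything from the paper, the valid states and transitions can be written out explicitly and each path through the flow checked for its expected end state.

```python
# Sketch: an invented door component with explicit states and allowed
# transitions, plus a walk over each path to check the outcome of every
# state change.
TRANSITIONS = {
    ("closed", "open"):   "opened",
    ("opened", "close"):  "closed",
    ("closed", "lock"):   "locked",
    ("locked", "unlock"): "closed",
}


def apply(state: str, event: str) -> str:
    """Return the next state, or raise if the transition is not allowed."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"{event!r} is invalid in state {state!r}") from None


# Each test case is a path through the flow diagram plus the expected end state.
paths = [
    (["open", "close", "lock"], "locked"),
    (["lock", "unlock", "open"], "opened"),
]

for events, expected in paths:
    state = "closed"
    for event in events:
        state = apply(state, event)
    assert state == expected, (events, state, expected)
print("all paths reached their expected states")
```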

Ramachandran does show that there are different types of tests (interoperability, functionality and so on) that the derived tests can cover, ensuring that the System Under Test functions as required from a black box testing perspective.

I would like to see some more formal methods for deriving the tests and the types of tests. This might come from a strange desire to use some of the Z notation methods that I have been taught, but it also feels appropriate. I have not yet read the full text of “A Statistical Approach for Improving the Performance of a Testing Methodology for Measurement Software”, but from a first glance it heads into a more robust discussion of the generation of test data.

 
