TestCenter Reference Documentation

These are the reference pages for the TestCenter. The TestCenter is an easy-to-use system for writing tests in MeVisLab. Tests are similar to modules, i.e. they have a definition file, a script file, and optionally a network. The script file contains a set of test functions that are executed automatically. Tests are specified in the Python scripting language.

For a more detailed introduction to the TestCenter, have a look at the "Getting Started" document. Example test cases can be found in the "MeVisLab/Examples" package (use the TestCaseManager module to load, view, and run them).

Generic vs. functional testing

The TestCenter supports two general test types: "generic" and "functional" tests. Functional tests verify specific functionality, for example of the scripting API, a single module, or a network. While a functional test case is run once, a generic test case is run for each module of a given set. Generic testing is a rather special use case and will not be needed often, but it allows a specific feature to be tested for a large set of modules, with a separate report for each module.

Examples of generic test cases are verifying a module's meta information or checking that its field names are valid.

Test Functions

The building blocks of a test case are the test functions. They give a test case structure and help produce a more meaningful report, as problems can be related to specific operations much more easily.

The following subsections describe how to create the different types of test functions and how to define an ordering on them.

Single test functions

A single test function is the most basic form. The name of such a test function must start with the "TEST_" prefix. The string following the prefix is used as the function's name in reports, with underscores ("_") replaced by spaces.

The following example defines a test function with the name "Single Test Function" that does nothing:

def TEST_Single_Test_Function ():
  pass

Please note that test functions must not have any parameters, and return values are ignored!

Grouping Test Functions

It's possible to group a set of single test functions, which helps organize test cases with many functions. A group is built from a set of existing test functions by returning the list of function objects from a method with the "GROUP_" prefix. The following example puts the three single test functions "TEST_TestFunction1", "TEST_TestFunction2", and "TEST_TestFunction3" into the group "GROUP_Test_Group":

def GROUP_Test_Group ():
  return (TEST_TestFunction1, TEST_TestFunction2, TEST_TestFunction3)

Iterative test functions

Sometimes it's required to run the same function on different entities, for example to verify that an algorithm works for a variety of input images. This can be achieved using iterative test functions. Internally, a list of virtual test functions is built that maps generated function names to a real (non-test) function called with the appropriate parameters.

The definition of such an iterative test function is split into two parts. A function is needed that informs the TestCenter that a certain function should be called for different input data. This is done using a special function whose name has the "ITERATIVETEST_" prefix. The base name of the virtual test functions is determined by the string following the underscore.

The "ITERATIVETEST_" function must return two values with the first being either a list or dictionary and the second a function. The given function will be called with the parameters specified in the first object. If the first object is a list its items are the parameters passed to the actual test function with the virtual name being extended by the index of the list item. If it's a dictionary the keys are appended to the virtual function name and the values being passed to the test function.

The first example generates three virtual functions named "Simple_Iterative_Test_0", "Simple_Iterative_Test_1", and "Simple_Iterative_Test_2" that call the function actualTestFunction with the parameters "first", "second", and "third":

def ITERATIVETEST_Simple_Iterative_Test ():
  return ["first", "second", "third"], actualTestFunction

def actualTestFunction (parameter):
  MLAB.log(parameter)

The second example generates the three virtual functions "Simple_Iterative_Test_One", "Simple_Iterative_Test_Two", and "Simple_Iterative_Test_Three":

def ITERATIVETEST_Simple_Iterative_Test ():
  return {"One":"first", "Two":"second", "Three":"third"}, actualTestFunction

def actualTestFunction (parameter):
  MLAB.log(parameter)

If you need to pass more than one parameter, the list returned by the "ITERATIVETEST_" function must contain lists of parameters; in case of a dictionary, the values must be lists.
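
A minimal sketch of an iterative test passing two parameters per call might look like this (the function name and parameter values are made up for illustration):

def ITERATIVETEST_Multi_Parameter_Test ():
  # Each dictionary value is a list of parameters passed to the test function.
  return {"One":["first", 1], "Two":["second", 2]}, actualTestFunction

def actualTestFunction (name, number):
  MLAB.log("%s: %d" % (name, number))

This would generate the virtual functions "Multi_Parameter_Test_One" and "Multi_Parameter_Test_Two".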

Field-Value Test Functions

The concept of field-value test cases is often required for testing (see the "The TestSupport package" section below or the Field-Value Test Cases page). Therefore, there is a simple mechanism to run them:

import os
from TestSupport import Base  # Base is part of the TestSupport package

def FIELDVALUETEST_A_Field_Value_Test ():
  return os.path.join(Base.getDataDirectory(), "test.xml"), ['test1', 'test4']

This method would run the field-value test cases "test1" and "test4", which must be specified in the test.xml file. If the list of test cases is not given or is empty, all available test cases will be executed.
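
If, for example, all test cases from the same file should be executed, a wrapper along the following lines would do (the function name is made up; same imports as above):

def FIELDVALUETEST_All_Field_Value_Tests ():
  # An empty list of test case names runs all test cases from the file.
  return os.path.join(Base.getDataDirectory(), "test.xml"), []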

Unit test wrapper functions

Unit tests implemented with Python's unittest module can be integrated into the TestCenter. The unit tests can still be executed on their own in pure Python, or along with high-level tests.

The integration follows the pattern used by iterative tests. You need to implement a wrapper function with the prefix "UNITTEST_" that returns a unittest.TestSuite. All test functions inside that TestSuite, and possibly nested TestSuites, are added as functions to a group with the name of the wrapper function.

from backend import getSuite

def UNITTEST_backendUnitTests():
  return getSuite()
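
Here, backend is assumed to be a module that assembles a suite from ordinary unittest cases. A minimal sketch of such a module (with made-up test content) could look like this:

import unittest

class BackendTest (unittest.TestCase):
  def test_addition (self):
    self.assertEqual(1 + 1, 2)

def getSuite ():
  return unittest.TestLoader().loadTestsFromTestCase(BackendTest)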

Creation of virtual test functions

The GROUP, ITERATIVETEST, UNITTEST, and FIELDVALUETEST methods are evaluated first to generate the list of virtual test functions. It's possible to change field values there to generate the required parameters, but the changes are reverted afterwards, i.e. test functions must not rely on values set at this point!

Ordering of test functions

If the test functions must be called in a specific order, the "TEST_", "ITERATIVETEST_", "UNITTEST_", and "FIELDVALUETEST_" prefixes can be extended with a substring defining that order, e.g. "TEST005_Fifth_Test_Function" and "TEST006_Sixth_Test_Function". The ordering string is removed and will not be part of the test function name appearing in the report.
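
For example, the following two functions run in the given order but appear as "Fifth Test Function" and "Sixth Test Function" in the report:

def TEST005_Fifth_Test_Function ():
  pass

def TEST006_Sixth_Test_Function ():
  pass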

Please note that the actual test function names (everything following the first underscore) must be unique, i.e. two functions named "TEST001_test_function" and "TEST002_test_function" are not allowed, as their names in the report would be equal.

Testing and status

Testing requires generating a status for each test function, i.e. one would like to verify that results have certain values and, in case of a mismatch, mark the test function as failed. The TestCenter uses MeVisLab's debug console to achieve this: all messages going to the debug console are collected for each test function, and the type of a message determines the status. If, for example, messages of type error are logged to the console, the test will be marked as failed. The TestCenter allows for a more detailed status classification than just passed and failed, though. A test can have one of the following statuses:

  • "ok": there have only been messages of type info.
  • "warning": at least one message was of type warning.
  • "error": there have been messages of type error.
  • "timeout": the TestCenter will abort the testing if a certain amount of time has passed.
  • "crash": the TestCenter will detect failure of tests. The last two statuses are necessary to prevent the system from failing in case of crashes or infinite loops.

The status of a test case is the worst status among its test functions.
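
As a sketch, assuming the standard MLAB logging calls, a test function could influence its status like this:

def TEST_Status_Example ():
  MLAB.log("an info message keeps the status at 'ok'")
  MLAB.logWarning("a warning message would set the status to 'warning'")
  MLAB.logError("an error message would mark the test function as 'error'")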

Setup and teardown of a test case

Sometimes it's necessary to have special setup and teardown methods that initialize and clean up some sort of fixture. This can be done by defining the methods "setUpTestCase" and "tearDownTestCase", which are called before and after the test functions, respectively. Messages generated in these methods are added to the following or preceding test function.
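
A minimal sketch of the two methods (with placeholder bodies) looks like this:

def setUpTestCase ():
  # Called before the test functions; initialize the fixture here.
  MLAB.log("setting up the test case")

def tearDownTestCase ():
  # Called after the test functions; clean up the fixture here.
  MLAB.log("tearing down the test case")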

Please note that after a crash the TestCenter will restart MeVisLab, retry the function that failed, and afterwards run the remaining test functions. Before the first of these functions is called, "setUpTestCase" is called again so that the environment is in the expected state.

The definition of a new test

The first thing to create for a new test is a definition file, as in the following example:

FunctionalTestCase SimpleTestCase {
  scriptFile = "$(LOCAL)/SimpleTestCase.py"
}

The file referenced by the "scriptFile" tag contains the test functions of the test case. The following additional tags are supported:

  • author: who wrote the test case.
  • comment: what the intention of the test case is.
  • timeout: how long the test case may take at most. After this amount of time has passed, the test case is cancelled and marked as timed out.
  • testGroups: which groups the test case belongs to. This can be used to filter out certain tests; in automatic testing, all tests in the "manual" group are excluded.
  • dataDirectory: where required data is located. This allows test data like input images or ground truth data to be kept outside the test case directory. Keep in mind that using this feature may prevent the test from running in an automatic fashion!

Test cases must be located in a package's TestCases directory to be found. There is no defined structure inside these directories, but it seems useful to choose an ordering similar to the modules directory.
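
Putting these tags together, a definition file might look like the following sketch (all values are purely illustrative; consult the MDL reference for the exact value syntax, e.g. the unit of timeout):

FunctionalTestCase SimpleTestCase {
  scriptFile    = "$(LOCAL)/SimpleTestCase.py"
  author        = "Jane Doe"
  comment       = "Checks the basic behavior of the example network."
  timeout       = 60
  testGroups    = example
  dataDirectory = "$(LOCAL)/data"
}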

The "associatedTests" tag can be added to a module's definition to specify the list of test case names that are testing features of this module. This can be used to run all tests that are using this module easily.

The TestSupport package

There is a Python package that supports developers in writing tests. It contains many functions that return important data (like the context of the current test case or the path to the report directory) or help with common tasks (like creating screenshots). Have a look at the TestSupport reference documentation to get a better idea of what it offers.

There are two additional Python modules that are not strictly part of the TestCenter but should be mentioned here as well, as they help a lot when developing test cases:

  • Often there is a need to set a lot of fields to certain values, apply some triggers, and afterwards verify that some fields have certain values. This is what field-value test cases are about. There is a special module (the FieldValueTestCaseEditor) to create such parameterizations for a network and save them into an XML file. For more information, have a look at the FieldValueTests Python module (also see the Field-Value Test Cases page).
  • As changing fields changes the environment, it would be nice if such changes could be reverted. This can't be done for every field, as triggered actions can't easily be undone, but the initial state of the network can be restored by setting all changed fields back to their initial values. The ChangeSet class achieves this by storing the original value of every field changed through its interface. When an object of this class is destroyed, the fields are reset (a usage sketch follows below).
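
The following sketch shows how such a change set might be used; the import path and the setFieldValue method are assumptions, so consult the TestSupport reference documentation for the actual API:

# Hypothetical sketch; the import path and method name are assumptions.
from TestSupport.ChangeSet import ChangeSet

def TEST_With_Reverted_Changes ():
  changeSet = ChangeSet()
  # The original field value is recorded before it is changed.
  changeSet.setFieldValue("Threshold.threshold", 42)
  # ... verify the behavior of the modified network here ...
  # When changeSet is destroyed, the field is reset to its original value.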