NOTE Testing methods and techniques are also addressed in Software Testing Methods and Techniques.
This chapter recalls the major principles of unit testing and extends the basic approach within the overall software development and validation logic.
The initial objective of Unit Testing is to check the correctness of the implementation of a software unit (at source code level) with respect to its definition in the corresponding SDD-DDD, i.e. the correct behaviour of a function and its related data over the domain of its input parameters.
It consists in specifying tests in the SUITP (and then developing the appropriate test code) that exercise the function under test over its data domain and check that its behaviour and results comply with the expectations defined in the SDD-DDD. When necessary, the lower level interactions are stubbed to simulate their behaviour, or replaced by already tested functions.
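The principle can be illustrated by the following minimal sketch (hypothetical Python; the unit under test, its lower-level driver read_gyro_rate() and all numeric values are invented for illustration, not taken from any SDD-DDD). The lower-level interaction is stubbed so the unit can be exercised over its input domain before the real driver exists:

```python
import unittest.mock as mock

# Hypothetical unit under test: computes a saturated proportional command
# from an attitude error, using a lower-level driver read_gyro_rate()
# that is not yet available at unit-test time.
def read_gyro_rate():
    raise NotImplementedError("real driver not integrated yet")

def commanded_on_time(attitude_error_deg, gain=0.5, max_on_time=1.0):
    rate = read_gyro_rate()          # lower-level interaction to be stubbed
    raw = gain * attitude_error_deg + 0.1 * rate
    return max(-max_on_time, min(max_on_time, raw))

def run_unit_tests():
    """Exercise the unit over its input domain with the driver stubbed."""
    with mock.patch(__name__ + ".read_gyro_rate", return_value=0.0):
        assert commanded_on_time(0.0) == 0.0             # nominal, zero error
        assert abs(commanded_on_time(1.0) - 0.5) < 1e-12 # nominal, mid-domain
        assert commanded_on_time(10.0) == 1.0            # saturation, upper bound
        assert commanded_on_time(-10.0) == -1.0          # saturation, lower bound
    return "all unit tests passed"
```

The stub both isolates the unit from untested code and lets the test force specific lower-level behaviours (here a zero rate); once the driver is itself tested, it can replace the stub.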
Unit tests are run as early as possible in order to gain sufficient confidence in the source code, allowing integration and then validation against the TS to start while minimising further debugging effort.
Unit testing is sometimes the only way to perform certain specific validations, e.g.:
or some low-level verifications, e.g.:
The code structural coverage technique (e.g. statements, decisions) is commonly associated with Unit Testing. It is recognised that achieving the code structural coverage objectives largely contributes to verifying one of the Unit Testing objectives (i.e. exercising the complete code). Moreover, measuring source code structural coverage fulfils an additional verification objective on the overall software, providing higher confidence in the software reliability.
However, the code structural coverage objectives can be achieved by means other than Unit Testing. Thus, as highlighted in the note of ECSS-E-ST-40C clause 5.8.3.5 b, this source code structural coverage analysis can be performed by running any kind of tests (e.g. validation tests), measuring the code coverage, and completing the coverage by additional (requirements based) tests, inspection or analysis. This approach is particularly efficient because functional testing generally allows a high level of code coverage to be reached quickly for a reduced effort. This strategy is recommended when adequate validation facilities are available (i.e. allowing code instrumentation) and leads to a large reconsideration of the basic, systematic Unit Testing approach.
Moreover, achieving the code structural coverage contributes to finding unused or unreachable code.
The systematic application of unit tests, or very deep (fine grain) unit tests, requires a huge effort, potentially not compatible with the project class, schedule or needs. Sometimes the level of the unit tests needs to be balanced against the criticality of a function, of a component or of the software itself: the granularity of the unit tests contributes to the expected software reliability level. In other cases, tests at unit level are fully redundant with other validation tests, or it can be considered more efficient to reduce the unit test effort and start validation earlier, while accepting the faults discovered during the validation phase. Hence, the standard Unit Testing approach needs to be adapted or optimised; the overall objectives of Unit Testing introduced above can be achieved through various combinations according to the project context and the software characteristics.
This part presents different areas for adapting or optimising the systematic Unit Testing approach, highlighting their strengths and weaknesses in order to help assess the risks with regard to the project context. The adaptation or optimisation of the Unit Testing approach needs to be assessed early in the project according to the criticality level, the development schedule, or the delivery expectations (i.e. the validation or reliability level of a delivered version). On the basis of this risk analysis, the Unit Testing strategy can be built and hence the way to achieve the Unit Testing objectives can be optimised. The defined strategy, as well as the appropriate rationale, needs to be documented (e.g. in the Software Development Plan or in the SUITP) and agreed with the Customer and Quality representatives.
The unit test technique is based on directly exercising the source code and, in particular, can take advantage of a “white box” approach. Hence, it allows an in-depth assessment of the correct behaviour of a function. However, it requires a huge effort, which may be balanced against the overall project’s real expectations.
If a project considers, as a matter of tailoring, that the unit test effort needs to be focussed, then the criticality criteria may be used in the following way. The criticality level of software units is evaluated (e.g. according to ECSS-Q-HB-80-03A). The unit tests required by ECSS-E-ST-40C 5.5.3.2c are applied to a different extent (e.g. a partial application of the requirement with respect to, for example, testing depth) depending on the unit’s criticality or its contribution to critical functions. It is highlighted that ECSS-Q-ST-80C defines requirements relevant to software criticality classification; in particular, in order to classify software components at different criticality levels, software failure propagation between components of different criticality must be prevented (ECSS-Q-ST-80C, sub-clause 6.2.3.1).
The full application of the ECSS-E-ST-40C 5.5.3.2c requirement is particularly important for:
Specific testing methods and techniques that can contribute to the assessment of critical software products are presented in Annex B.
Unit testing is one of the means, at developer level, to reduce the risk of discovering faults during validation. Other means are introduced here.
A first example is to reconsider the unit test objectives in the frame of the overall software verification and validation. Some unit tests are performed on selected units. Then functional tests are run, and structural coverage is measured. At this point, most of the source code functions are checked over a representative set of their input parameters. The weakness is that no evidence is provided that the set of input parameters is completely covered. To provide this evidence, and to achieve complete structural coverage, complementary unit tests are performed, or this approach is completed with e.g. the use of “abstract interpretation” tools, run in a systematic way, including the subsequent correction of the potential errors.
This approach intends to achieve the unit testing objectives of a part of the units by functional tests instead of classical “unit tests”. This is equivalent to performing the test of a unit without stubbing the external units, using the real components instead.
Nevertheless, tricky behaviours are easier to test outside the functional context, and unit tests typically remain particularly efficient for checking robustness code or error cases, which are difficult to exercise through functional tests. Unit tests are also performed for units having high measured complexity metrics.
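For instance, an error path that depends on a driver-level fault can be trivial to reach at unit level with a stub, while being very hard to trigger through functional tests (hypothetical Python sketch; the CRC-failure scenario and all names are invented for illustration):

```python
# Hypothetical robustness case: a telemetry read wrapper whose error path
# (a CRC failure reported by the low-level driver) is hard to provoke in a
# functional context, but trivial to exercise at unit level with a stub.

class CrcError(Exception):
    pass

def read_packet(driver_read):
    """Return the payload, substituting a safe default on CRC failure."""
    try:
        return driver_read()
    except CrcError:
        return b""          # degraded-mode behaviour under test

def nominal_read():
    return b"\x01\x02"      # stub simulating a correct driver read

def faulty_read():
    raise CrcError("injected fault")  # stub forcing the error branch

assert read_packet(nominal_read) == b"\x01\x02"
assert read_packet(faulty_read) == b""   # error case exercised directly
```

Injecting the fault through a stub gives deterministic coverage of the error branch, which a functional test could only reach through hardware fault injection or corrupted input data.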
A second example, in order to limit the risk of discovering faults during validation, is to improve the maturity of the source code through early source code peer reviews.
As a third example, it is also recommended to use powerful simulators during validation, including high-level investigation capabilities at the software implementation level.
Another example consists in applying the tests to a group of software units. The principle is to perform unit testing with regard to the software design elements (i.e. components or sets of components), on a produced software unit or a set of software units. Unit test cases are not systematically defined at the level of each software unit. This optimisation level is generally reached by combining the functional view of the TS and the design view of the SDD. The functional view helps define test scenarios and the design view helps define the set of units to be tested together. Both views (functional and design) also allow integration test objectives to be covered (see next chapter) by checking that the information exchanged between the design items, exercised through the functional tests, is correct. The “function” defined in this testing approach is often close to the functions defined in the SRS, e.g. for an on-board central software, a PUS service, an equipment management, an AOCS attitude estimation function or a thermal regulation loop.
With this approach, the external interfaces of the function are tested in order to ease the further integration with the rest of the software. The internal operations of the function are implicitly tested during the functional tests, contributing to the integration test objectives.
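A minimal sketch of this group-level approach (hypothetical Python; the thermal regulation “function”, its two design units and all numeric values are invented for illustration): the test scenarios exercise only the external interface of the group, so the exchange between the internal units is checked implicitly, without stubs.

```python
# Hypothetical 'function' in the SRS sense (a thermal regulation loop)
# built from two design units that are tested together as a group.

def sensor_to_celsius(raw):
    """Unit 1: acquisition/conversion (illustrative scale and offset)."""
    return raw * 0.25 - 10.0

def heater_command(temp_c, setpoint=20.0, hysteresis=1.0):
    """Unit 2: control law with a hysteresis band around the setpoint."""
    if temp_c < setpoint - hysteresis:
        return "ON"
    if temp_c > setpoint + hysteresis:
        return "OFF"
    return "HOLD"

def thermal_loop_step(raw_reading):
    """External interface of the group: the only entry point the tests use."""
    return heater_command(sensor_to_celsius(raw_reading))

# Functional test scenarios at the group's external interface; the data
# exchanged between unit 1 and unit 2 is checked implicitly.
assert thermal_loop_step(40) == "ON"     # 0 degC: below band, heater on
assert thermal_loop_step(120) == "HOLD"  # 20 degC: within hysteresis band
assert thermal_loop_step(160) == "OFF"   # 30 degC: above band, heater off
```

Because the real units are wired together, these scenarios cover both the unit-level behaviour of each design item and part of the integration test objectives for the exchanges between them.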