Test Types

This part of Iterative Solution Testing describes the types of tests available for validating Solutions. Not all Test Types are appropriate for all Solution Types. Each Solution should identify the types appropriate for use in its Testing Strategy.

After viewing this page, check out the additional detail for each Test Type.


Divide and Conquer: Test Types

There are simply too many possible permutations for any Solution to be validated by a single round of testing, and there is never sufficient time to execute every desired test. So how can trust in a Solution be built? How can it be proven valid?

To manage the complexity inherent in large Solutions, testing objectives are divided into multiple Test Types. Each type has its own specific purpose and objectives, and each is divided into a set of actions for a given scope of testing. As testing efforts progress, each type involves progressively more complex situational tests. However, this model only works when earlier Test Types establish a foundation, or level of trust, upon which later types can build. If the foundation is not valid, then nothing built upon it is valid either. By following the progression of Test Types, and ensuring that each type provides a sound, solid validation of its objective, confidence in the Solution is built over time and across its parts.

All Test Types are built using the basic Test Components described in the previous section. As with Test Components, Test Types offer a lot of room for reuse. Reuse is important because there will not be sufficient time to (re)create all the tests needed to validate incremental functionality. Again, as with Test Components, there are two flavors of reuse:

1. When functionality is extended, update relevant existing tests to include the new functionality rather than creating new tests. Try to maintain close traceability between tests and the functionality they seek to cover (i.e., validate). The more tests there are for a Solution, the more difficult management and traceability become, yet there must be enough tests to validate the scope of Solution functionality. There is a balance to be found between having enough tests and ensuring confidence in the test results; it is better to have fewer, more comprehensive tests than many narrow ones.
2. Clone and then modify an existing test to create a test of a new Test Type (see the sketch after this list). For example, take a Functional Test that has been used successfully, copy it, and modify it to become an Integration Test covering the same functionality. Copy the Integration Test to create a System Test, and so on. Trying to mix and match tests of varying types is not a good idea; it is quicker, easier, and less expensive to clone and modify existing tests than to try to manage the variances of each test objective within a single test.
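
As a concrete illustration of the second flavor of reuse, the sketch below clones a Functional Test of a 'Create Account' Function into an Integration Test that also validates data flow into a downstream 'Convert Lead' Function. This is a minimal JUnit 5 sketch, not the methodology's prescribed code; the AccountService and LeadService classes are hypothetical in-memory stand-ins so the example is self-contained, and real tests would target the Objects the Solution actually delivers.

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.*;

    import java.util.HashMap;
    import java.util.Map;
    import java.util.UUID;

    // Hypothetical in-memory stand-ins for Solution Objects (included only so
    // the example compiles on its own).
    class AccountService {
        final Map<String, String> accounts = new HashMap<>();

        String createAccount(String name) {
            String id = UUID.randomUUID().toString();
            accounts.put(id, name);
            return id;
        }

        boolean exists(String accountId) {
            return accounts.containsKey(accountId);
        }
    }

    class LeadService {
        // A Lead converts only when the target Account already exists.
        boolean convertLead(String leadName, String accountId, AccountService accounts) {
            return leadName != null && accounts.exists(accountId);
        }
    }

    // Functional Test (FNC): validates the single 'Create Account' Function.
    class CreateAccountFunctionalTest {
        @Test
        void createAccountReturnsAnId() {
            AccountService service = new AccountService();
            String id = service.createAccount("Acme");
            assertNotNull(id, "A created Account should be assigned an Id");
        }
    }

    // Clone of the FNC test, extended into an Integration Test (INT): the same
    // Function is exercised, then data flow into the downstream 'Convert Lead'
    // Function is validated as well.
    class CreateAccountIntegrationTest {
        @Test
        void createdAccountFlowsIntoLeadConversion() {
            AccountService accountService = new AccountService();
            LeadService leadService = new LeadService();

            String accountId = accountService.createAccount("Acme");
            assertTrue(leadService.convertLead("Jane Doe", accountId, accountService),
                    "The Lead should convert against the newly created Account");
        }
    }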

The summary below covers the common types of testing conducted throughout Iterative Solution Implementation. It does not depict all of the available Test Types.

Unit Test (UNT)
- Purpose: To validate that individual pieces of a RICE* Object work as specified.
- Objectives: To ensure the developed Object meets given functional and technical specifications.
- Scope: A small system element, such as a field on a form or a piece of code.
- Test Data: Embedded
- Negative Test %: 95%
- Test Case Source: Developer-coded JUnit
- Test Scenario Source: N/A

Functional Test (FNC)
- Purpose: To validate that one or more Objects and related Configuration enable a given Function to operate as designed.
- Objectives: To ensure Configuration is correct; any added or modified Object(s) support the given Function; and the Function properly supports the User Story and related AC.
- Scope: Find configuration errors / weaknesses in each single Function (e.g., 'Create Account', 'Convert Lead').
- Test Data: Ad hoc
- Negative Test %: 75% - 85%
- Test Case Source: Manual entry by Delivery Team Tester (and/or others)
- Test Scenario Source: Manual entry by Delivery Team's Test Analyst (and/or others)

Integration Test (INT)
- Purpose: To validate that a series of Functions allows data to flow properly, enabling a process, or a portion thereof, to operate as designed.
- Objectives: To ensure Configuration & Object(s) support functionality and AC using various cycles of transactions and data flow within / across Functional Areas.
- Scope: Assess "Controlled" (i.e., valid & invalid) transactional data's flow into, through & out of a series of Functions.
- Test Data: Designed
- Negative Test %: 60% - 70%
- Test Case Source: Clone of FNC Test
- Test Scenario Source: Manual entry by Delivery Team's Test Analyst

System Test (SYS)
- Purpose: To validate that all 3 primary System Components** work together to process transactions properly.
- Objectives: To ensure the successful merge of: a) FNC & INT tested Objects; b) CNV data + Production-level Interfaces; and c) Production configuration.
- Scope: Assess "Uncontrolled" (i.e., Production-like) transactional data's flow through the system.
- Test Data: Unfiltered
- Negative Test %: N/A
- Test Case Source: Clone of INT Test
- Test Scenario Source: QA Group clone & extend INT Tests

Regression Test (REG)
- Purpose: To validate that system area(s) previously tested still function properly following some unrelated change.
- Objectives: To ensure re-executed tests of functionality unrelated to that undergoing FNC, INT & SYS tests still pass.
- Scope: A subset of INT Tests of core functionality previously SYS tested.
- Test Data: Designed
- Negative Test %: N/A
- Test Case Source: Clone of INT Test
- Test Scenario Source: QA Group clone & extend INT Tests

User Acceptance Test (UAT)
- Purpose: To validate that all issues raised during any prior test cycles have been addressed in a manner acceptable to the Customer.
- Objectives: To ensure business awareness and sign-off to introduce new or modified Features to Production.
- Scope: No new scenarios or tests to be executed; re-run SYS tests to show resolution of prior problems.
- Test Data: Tailored
- Negative Test %: N/A

*RICE Objects = Reports, Interfaces, Conversions, and Extensions; i.e., any objects that are not Vendor-delivered.

**Primary System Components = 1) Configuration Data; 2) RICE Objects; and 3) Operational Data.
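
To make the Unit Test entry above more concrete, the sketch below shows a developer-coded JUnit test with embedded test data and a heavy bias toward negative cases, in line with the ~95% negative-test guideline. The DiscountCalculator class is hypothetical; it simply stands in for a small piece of RICE Object code that a developer would unit test.

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.*;

    // Hypothetical RICE-style Extension: a small piece of code to be unit tested.
    class DiscountCalculator {
        // Returns the discounted price; rejects invalid input rather than guessing.
        static double apply(double price, double discountPercent) {
            if (price < 0) {
                throw new IllegalArgumentException("price must be >= 0");
            }
            if (discountPercent < 0 || discountPercent > 100) {
                throw new IllegalArgumentException("discount must be between 0 and 100");
            }
            return price * (1 - discountPercent / 100.0);
        }
    }

    // Unit Test (UNT): embedded test data, mostly negative cases.
    class DiscountCalculatorTest {

        @Test
        void appliesValidDiscount() {            // the single positive case
            assertEquals(90.0, DiscountCalculator.apply(100.0, 10.0), 0.001);
        }

        @Test
        void rejectsNegativePrice() {
            assertThrows(IllegalArgumentException.class,
                    () -> DiscountCalculator.apply(-1.0, 10.0));
        }

        @Test
        void rejectsNegativeDiscount() {
            assertThrows(IllegalArgumentException.class,
                    () -> DiscountCalculator.apply(100.0, -5.0));
        }

        @Test
        void rejectsDiscountOverOneHundredPercent() {
            assertThrows(IllegalArgumentException.class,
                    () -> DiscountCalculator.apply(100.0, 150.0));
        }
    }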

After reviewing the above Test Types, see the guidance on Test Development & Execution.
