Testing and Validation

Before declaring victory, you need to know: Does it actually work? Testing and validation answer this question through systematic checks that your improvements were built right and solve the right problem.


Verification vs. Validation

These terms are often confused but mean different things:

| Concept | Question | Focus |
|---|---|---|
| Verification | Are we building it right? | Did we follow the design? |
| Validation | Did we build the right thing? | Does it solve the problem? |

Both are necessary:

  • Verification catches errors in execution
  • Validation catches errors in understanding

You can perfectly implement the wrong solution (passes verification, fails validation) or imperfectly implement the right solution (fails verification, might still meet needs).


Types of Testing

Functional Testing

Does each feature work as specified?

Approach:

  • Test each requirement independently
  • Verify expected outputs for given inputs
  • Check boundary conditions
  • Test error handling

Example:

| Requirement | Test Case | Expected Result |
|---|---|---|
| Orders over $5K need approval | Submit $4,999 order | Auto-approved |
| Orders over $5K need approval | Submit $5,001 order | Routed to manager |
| Orders over $5K need approval | Submit $5,000 exactly | ??? (boundary; clarify the requirement) |
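
Boundary rows like these translate directly into automated checks. Below is a minimal pytest sketch; `needs_approval()` is a hypothetical stand-in for the real routing logic, and the behavior at exactly $5,000 is an assumed resolution that stakeholders would need to confirm.

```python
# Boundary-value tests for the approval rule (run with: pytest).
# needs_approval() is a hypothetical stand-in for the real routing logic.

APPROVAL_THRESHOLD = 5_000

def needs_approval(order_total: float) -> bool:
    """Assumed interpretation: approval is required strictly over $5,000."""
    return order_total > APPROVAL_THRESHOLD

def test_just_under_threshold_is_auto_approved():
    assert not needs_approval(4_999)

def test_just_over_threshold_routes_to_manager():
    assert needs_approval(5_001)

def test_exact_threshold_pins_the_clarified_requirement():
    # "Over $5K" is ambiguous at exactly $5,000; this test encodes
    # whichever interpretation stakeholders agree on.
    assert not needs_approval(5_000)
```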

Integration Testing

Do the parts work together?

Areas to test:

  • Data flows between systems
  • Handoffs between process steps
  • Timing and sequencing
  • Error handling across boundaries
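
As an illustration, an integration test at a handoff point verifies that the fields the downstream step depends on survive the crossing intact. The systems and field names below are invented; a minimal sketch:

```python
import json

# Hypothetical handoff: the order-entry system exports JSON and the
# fulfillment system imports it. The test checks the round trip, not
# just "no error was raised".

def export_order(order: dict) -> str:
    """Stand-in for the upstream system's export step."""
    return json.dumps(order)

def import_order(payload: str) -> dict:
    """Stand-in for the downstream system's import step."""
    return json.loads(payload)

def test_order_survives_handoff():
    order = {"id": "SO-1001", "total": 4999.00, "currency": "USD"}
    assert import_order(export_order(order)) == order
```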

User Acceptance Testing (UAT)

Do real users find it acceptable?

Key aspects:

  • Actual users test (not developers or analysts)
  • Real-world scenarios
  • Production-like environment
  • Final checkpoint before go-live

UAT success criteria should be:

  • Defined before testing begins
  • Agreed upon by stakeholders
  • Measurable and objective

Performance Testing

Does it work under real-world conditions?

| Test Type | Question | Example |
|---|---|---|
| Load testing | Can it handle expected volume? | Process 1,000 orders/hour |
| Stress testing | What happens at the breaking point? | What if volume doubles? |
| Endurance testing | Does it work over time? | Run for 48 hours continuously |
| Spike testing | Can it handle sudden surges? | Black Friday traffic |
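
Dedicated tools (JMeter, k6, Locust) do this properly, but the shape of a load test is simple enough to sketch: submit a batch of concurrent requests and check throughput against the criterion. `submit_order()` below is a stub standing in for the real client call, and the thresholds are the assumed targets from the table above.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def submit_order(order_id: int) -> float:
    """Stub for the real request; returns observed latency in seconds."""
    start = time.perf_counter()
    time.sleep(0.01)                      # stand-in for the real call
    return time.perf_counter() - start

def run_load_test(num_orders: int = 500, concurrency: int = 20) -> None:
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = list(pool.map(submit_order, range(num_orders)))
    elapsed = time.perf_counter() - start
    throughput = num_orders / elapsed * 3600          # orders per hour
    print(f"throughput: {throughput:,.0f} orders/hour, "
          f"worst latency: {max(latencies):.3f}s")
    assert throughput >= 1_000, "below the 1,000 orders/hour criterion"

if __name__ == "__main__":
    run_load_test()
```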

Regression Testing

Did improvements break something else?

When you change a process, other things might unexpectedly stop working. Regression testing verifies that existing functionality still works after changes.
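
One practical convention (an assumption, not a standard you must follow) is to tag the tests that guard existing behavior so the whole regression suite can be rerun after every change:

```python
import pytest

# Run the regression suite after each change with: pytest -m regression
# (register the marker in pytest.ini to avoid warnings).

def apply_discount(total: float) -> float:
    """Stand-in for existing logic the new change must not break."""
    return total * 0.9 if total >= 100 else total

@pytest.mark.regression
def test_existing_volume_discount_unchanged():
    assert apply_discount(200) == 180
```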


Testing Approaches for Process Improvement

Parallel Operations

Run old and new processes simultaneously:

When to use:

  • High-risk changes
  • Critical processes
  • Need confidence before cutover

Considerations:

  • Doubles workload temporarily
  • Need clear criteria for comparison
  • Must distinguish legitimate differences (intended changes) from defects; see the sketch below
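
A comparison harness might look like the following sketch: run both implementations on the same inputs and report disagreements, with a hook for differences that are expected. Everything here, including the two process functions, is illustrative.

```python
# Parallel-run comparison sketch. old_process / new_process are
# hypothetical stand-ins for the two implementations being compared.

def old_process(order: dict) -> dict:
    return {"status": "review" if order["total"] > 5000 else "approved"}

def new_process(order: dict) -> dict:
    return {"status": "review" if order["total"] >= 5000 else "approved"}

def is_legitimate_difference(order: dict) -> bool:
    # Encode agreed-upon expected differences here (e.g., the clarified
    # boundary rule) instead of eyeballing each mismatch.
    return order["total"] == 5000

def compare(orders):
    for order in orders:
        old, new = old_process(order), new_process(order)
        if old != new and not is_legitimate_difference(order):
            print(f"MISMATCH {order}: old={old} new={new}")

compare([{"total": t} for t in (100, 4_999, 5_000, 5_001)])
```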

Pilot Testing

Test with a subset before full rollout:

Pilot selection considerations:

  • Representative of full population
  • Willing participants
  • Ability to provide feedback
  • Manageable scope

Pilot success criteria:

  • Define what "works" means before starting
  • Include both functional and adoption metrics
  • Set clear decision points

Simulation

Test without affecting real operations:

"Simulation enables data-driven decision-making about which process modifications warrant investment and implementation."

Simulation benefits:

  • No risk to actual operations
  • Can test extreme scenarios
  • Repeatable experiments
  • Multiple configurations can be compared side by side

Simulation limitations:

  • Only as good as the model
  • May miss real-world factors
  • Requires expertise to build and interpret
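
A toy Monte Carlo run shows the shape of the technique. The distributions and parameters below are invented for illustration; a real model would be calibrated against measured process data.

```python
import random
import statistics

def simulate_claim(mean_review_days: float, rework_rate: float) -> float:
    """One claim's turnaround: review time, plus a possible rework loop."""
    days = random.expovariate(1 / mean_review_days)
    if random.random() < rework_rate:     # claim bounced back for rework
        days += random.expovariate(1 / mean_review_days)
    return days

def run(label: str, mean_review_days: float, rework_rate: float, n: int = 10_000):
    results = sorted(simulate_claim(mean_review_days, rework_rate) for _ in range(n))
    print(f"{label}: mean={statistics.mean(results):.1f} days, "
          f"p95={results[int(n * 0.95)]:.1f} days")

random.seed(42)                           # repeatable experiments
run("current ", mean_review_days=9.0, rework_rate=0.40)
run("proposed", mean_review_days=4.0, rework_rate=0.10)
```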

Creating Test Cases

From Requirements to Tests

Every requirement should have at least one test case:

Test Case Template

| Element | Description |
|---|---|
| Test ID | Unique identifier |
| Requirement | What requirement is being tested |
| Description | What is being tested |
| Preconditions | Setup needed before test |
| Steps | Specific actions to take |
| Expected result | What should happen |
| Actual result | What did happen |
| Status | Pass, Fail, Blocked |
| Notes | Additional observations |

Example Test Case

Test ID: TC-042
Requirement: REQ-007 (Auto-approve orders under $1,000)

Description: Verify orders under threshold are automatically approved

Preconditions:
- Test user has order entry access
- No holds on test customer account
- Test product in stock

Steps:
1. Log in as test user
2. Create new order for test customer
3. Add product with total $999
4. Submit order
5. Check order status

Expected Result: Order status shows "Approved" without manual intervention

Actual Result: [To be completed during testing]
Status: [Pass/Fail]
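
Where the process under test is automatable, the same test case can be expressed as executable code. Below is a hypothetical pytest rendering of TC-042; the fake order object and its API are invented stand-ins for the real system's test interface.

```python
import pytest
from dataclasses import dataclass, field

@dataclass
class FakeOrder:
    """Stand-in for the real order object; real tests would hit the system."""
    lines: list = field(default_factory=list)
    status: str = "Draft"

    def add_line(self, product: str, total: float) -> None:
        self.lines.append((product, total))

    def submit(self) -> None:
        # REQ-007 behavior under test: auto-approve orders under $1,000.
        grand_total = sum(total for _, total in self.lines)
        self.status = "Approved" if grand_total < 1_000 else "Pending Review"

@pytest.fixture
def new_order():
    # TC-042 preconditions (user access, no account holds, stock)
    # would be established here against the real environment.
    return FakeOrder()

def test_tc042_order_under_threshold_auto_approves(new_order):
    """REQ-007: orders under $1,000 are approved without manual steps."""
    new_order.add_line(product="TEST-SKU", total=999.00)
    new_order.submit()
    assert new_order.status == "Approved"
```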

Test Coverage

Ensure you test thoroughly:

| Coverage Type | What It Means |
|---|---|
| Requirements coverage | Every requirement has tests |
| Happy path | Normal scenarios work |
| Error paths | Errors handled gracefully |
| Boundary conditions | Edge cases covered |
| Integration points | Connections work |
| Performance | Meets speed/volume needs |
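
Requirements coverage is easy to check mechanically if each test case records the requirement it exercises, as the template above does. A minimal sketch with invented IDs:

```python
# Flag requirements that no test case references.

requirements = {"REQ-001", "REQ-002", "REQ-007"}

test_cases = {
    "TC-041": "REQ-001",
    "TC-042": "REQ-007",   # the example test case above
}

untested = requirements - set(test_cases.values())
if untested:
    print("Requirements with no test case:", sorted(untested))
else:
    print("All requirements covered.")
```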

Acceptance Criteria

Clear acceptance criteria prevent arguments about whether testing passed.

Good Acceptance Criteria

Specific: "Response time under 2 seconds" not "fast enough"

Measurable: "Error rate below 1%" not "low errors"

Achievable: Based on realistic expectations

Relevant: Connected to actual business needs

Complete: Covers all important aspects

Example Acceptance Criteria

For an improved order processing system:

| Category | Criterion | Threshold |
|---|---|---|
| Functionality | All requirements tested and passing | 100% |
| Defects | No critical or high-severity defects | 0 |
| Defects | Medium defects with workarounds | ≤ 5 |
| Performance | Average processing time | < 30 seconds |
| Performance | Peak capacity | 500 orders/hour |
| User acceptance | UAT sign-off | All key users |
| Documentation | User guide complete | Yes |
| Training | All users trained | 100% |
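
Criteria in this form can be evaluated mechanically at the end of testing. A sketch of such a go/no-go check; the measured values are invented placeholders:

```python
# Evaluate (invented) measured results against the acceptance criteria.

criteria = [
    ("requirements passing (%)",  lambda v: v == 100, 100),
    ("critical/high defects",     lambda v: v == 0,   0),
    ("medium defects",            lambda v: v <= 5,   3),
    ("avg processing time (s)",   lambda v: v < 30,   24.0),
    ("peak capacity (orders/hr)", lambda v: v >= 500, 540),
]

all_met = True
for name, check, measured in criteria:
    ok = check(measured)
    all_met = all_met and ok
    print(f"{'PASS' if ok else 'FAIL'}  {name}: {measured}")

print("GO for release" if all_met else "NO-GO: criteria unmet")
```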

Handling Test Results

When Tests Pass

  • Document the results
  • Proceed to next phase or go-live
  • Archive test evidence
  • Celebrate (briefly)

When Tests Fail

Failure analysis questions:

  • Is the test valid?
  • Is the requirement correct?
  • Is the design appropriate?
  • Was implementation faulty?
  • Is the environment set up correctly?

Defect Classification

| Severity | Definition | Example | Response |
|---|---|---|---|
| Critical | System unusable | Crashes on login | Stop; fix immediately |
| High | Major function broken | Can't submit orders | Fix before go-live |
| Medium | Feature impaired | Workaround exists | Plan fix; go-live may proceed |
| Low | Minor issue | Cosmetic defect | Fix when convenient |
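
The severity table translates naturally into a go-live gate. A minimal sketch: the severity names follow the table, and the medium-defect limit is the assumed threshold from the acceptance criteria earlier.

```python
from enum import Enum

class Severity(Enum):
    CRITICAL = 1
    HIGH = 2
    MEDIUM = 3
    LOW = 4

def can_go_live(open_defects: list, max_medium: int = 5) -> bool:
    """No critical/high defects, and medium defects within the agreed limit."""
    if any(d in (Severity.CRITICAL, Severity.HIGH) for d in open_defects):
        return False
    return open_defects.count(Severity.MEDIUM) <= max_medium

print(can_go_live([Severity.MEDIUM, Severity.LOW]))   # True
print(can_go_live([Severity.HIGH]))                   # False
```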

Validation in Process Improvement

Beyond system testing, validate that the improved process actually achieves its goals.

Process Validation Questions

  1. Does it solve the original problem?

    • Compare before and after metrics
    • Verify root causes are addressed
  2. Does it work in practice?

    • Observe actual operations
    • Gather user feedback
  3. Are stakeholders satisfied?

    • Customer/user satisfaction
    • Operator feedback
    • Management confidence
  4. Is it sustainable?

    • Can it be maintained?
    • Are controls in place?
    • Is documentation adequate?

Validation Example

Original problem: Claims processing takes 14 days on average

Improvement goal: Reduce to 5 days

Validation:

  • Measured post-implementation average: 4.3 days ✓
  • Tracked for 4 weeks to confirm consistency ✓
  • Surveyed processors: 87% positive feedback ✓
  • Measured customer complaints: down 45% ✓
  • Reviewed with management: approved for full adoption ✓

Testing Checklist

Before declaring testing complete:

  • All requirements have test cases
  • Functional tests executed and passed
  • Integration points tested
  • User acceptance testing complete
  • Performance meets criteria
  • Regression testing shows no breaks
  • All critical/high defects resolved
  • Acceptance criteria met
  • Sign-offs obtained
  • Test results documented
  • Lessons learned captured