Systematically test existing code against requirements using comprehensive test data categories, identify defects through evidence-based analysis, and implement quality fixes.
Assess the given code against the stated requirements. Design and run tests with clearly identified test data in five categories: valid, valid extreme, invalid, invalid extreme, and erroneous; record expected versus actual results for each test.
Correct errors and adjust the code so it meets the brief while following sound programming conventions.
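A minimal sketch of that record-keeping, written in Python on the assumption that a test log row holds the test data, the expected result, and the actual result (the TestCase fields and run_case helper are invented for illustration, not taken from the brief):

from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class TestCase:
    """One row of the test log: what was run, what was expected, and what actually happened."""
    test_id: str
    category: str      # valid, valid extreme, invalid, invalid extreme, or erroneous
    test_data: Any
    expected: Any
    actual: Any = None
    outcome: str = "not run"

def run_case(case: TestCase, func: Callable[[Any], Any]) -> TestCase:
    """Run the function under test on the case's data and record the actual result."""
    try:
        case.actual = func(case.test_data)
    except Exception as exc:  # an exception is still a recordable actual result
        case.actual = f"raised {type(exc).__name__}"
    case.outcome = "pass" if case.actual == case.expected else "fail"
    return case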
Valid: normal input that should be accepted and processed correctly. Purpose: verify that normal functionality works as intended.
Valid extreme: valid input at the boundaries of the acceptable ranges. Purpose: test edge cases that lie within valid parameters.
Invalid: input that violates business rules but is structurally correct. Purpose: verify appropriate error handling for rule violations.
Invalid extreme: input that exceeds system boundaries or formats. Purpose: test system robustness against extreme invalid input.
Erroneous: input that is fundamentally malformed or corrupted. Purpose: verify security and stability against malformed input.
A worked example of each category is sketched after this list.
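As a concrete illustration, the cases below exercise one hypothetical function across all five categories; validate_mark and its 0 to 100 rule are assumptions chosen for the example, not part of the brief:

def validate_mark(raw: str) -> int:
    """Hypothetical function under test: accept a whole-number mark from 0 to 100."""
    mark = int(raw)                    # erroneous input (non-numeric) raises ValueError here
    if not 0 <= mark <= 100:
        raise ValueError("mark must be between 0 and 100")
    return mark

# One example per test data category: (category, test data, expected result)
cases = [
    ("valid",           "57",    57),            # normal, acceptable input
    ("valid extreme",   "0",     0),             # boundary of the acceptable range
    ("invalid",         "105",   "ValueError"),  # well-formed number that breaks the rule
    ("invalid extreme", "99999", "ValueError"),  # far outside any sensible range
    ("erroneous",       "ten",   "ValueError"),  # malformed input that cannot be parsed
]

for category, data, expected in cases:
    try:
        actual = validate_mark(data)
    except ValueError:
        actual = "ValueError"
    print(f"{category:15} in={data!r:8} expected={expected!r:12} actual={actual!r:12} "
          f"{'pass' if actual == expected else 'fail'}")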
Test documentation: comprehensive record showing all test cases, data, expected results, actual results, and analysis. Format: spreadsheet or structured document with clear annotations (one possible layout is sketched after this list).
Corrected source code: modified source code with all identified defects fixed and improvements implemented. Format: source code files with clear comments explaining the changes made.
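If the test documentation is kept as a spreadsheet, a small export script such as the sketch below can produce it from recorded results; the column names and file name are just one assumed layout, not a required format:

import csv

# Each row: test ID, category, test data, expected result, actual result, outcome, analysis.
results = [
    ("T01", "valid",   "57",  "57",            "57",           "pass", ""),
    ("T03", "invalid", "105", "error message", "105 accepted", "fail", "range check missing upper bound"),
]

with open("test_log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Test ID", "Category", "Test data", "Expected", "Actual", "Outcome", "Analysis"])
    writer.writerows(results)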
Line-by-line trace: work through the code line by line, understanding the logic flow. Best used for complex logic errors or when multiple issues are suspected.
Test-driven isolation: use failing test cases to isolate problem areas. Best used when tests clearly identify specific failure points.
Data flow tracing: follow the flow of data and variable states through execution. Best used for data handling or calculation errors.
Boundary analysis: focus on edge cases and input validation logic. Best used when extreme-value tests are failing.
Specification comparison: compare code behavior against the original specifications. Best used to identify logic that does not match the intended functionality.
A short worked example of boundary analysis follows this list.