
🧪 AI Model Testing Instructions Guide

Overview

This guide provides clear, actionable instructions for AI models to perform thorough testing based on user requirements. It includes systematic approaches, checklists, and verification procedures to ensure complete test coverage and high-quality results.


Table of Contents

  1. Pre-Testing Phase
  2. Test Planning
  3. Test Execution
  4. Test Coverage Verification
  5. Post-Testing Phase
  6. Checklists and Checkpoints
  7. Best Practices for AI Models
  8. Conclusion


1. Pre-Testing Phase

1.1 Requirement Analysis

Objective: Understand and document all testing requirements

Checklist

  • Parse user instructions completely
  • Identify all explicit requirements
  • Identify implicit requirements
  • Document edge cases mentioned
  • List all systems/components to be tested
  • Define success criteria
  • Identify any constraints or limitations

Key Questions

  • What is the primary objective of testing?
  • What are the acceptance criteria?
  • What are the expected inputs and outputs?
  • Are there any specific scenarios to focus on?
  • What level of testing is required (unit, integration, system, etc.)?

1.2 Scope Definition

Objective: Clearly define what will and won't be tested

Checklist

  • Define testing boundaries
  • List in-scope functionalities
  • List out-of-scope items
  • Identify dependencies
  • Document assumptions
  • Identify test environment requirements

2. Test Planning

2.1 Test Strategy Development

Objective: Create a comprehensive testing approach

Checklist

  • Choose appropriate testing methodologies
  • Define test types to be performed:
    • Functional testing
    • Non-functional testing
    • Performance testing
    • Security testing
    • Usability testing
    • Compatibility testing
  • Prioritize test scenarios
  • Estimate testing effort
  • Define test data requirements

2.2 Test Case Design

Objective: Create detailed test cases covering all scenarios

Test Case Categories

  1. Positive Test Cases

    • Valid inputs
    • Normal flow scenarios
    • Expected behavior verification
  2. Negative Test Cases

    • Invalid inputs
    • Error conditions
    • Exception handling
  3. Edge Cases

    • Boundary values
    • Extreme conditions
    • Corner cases
  4. Integration Test Cases

    • Component interactions
    • Data flow between modules
    • API integrations
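
The four categories above can be illustrated with a small sketch. The validator below (`parse_age`) is a hypothetical function invented for illustration; the point is how each category exercises it differently.

```python
# A hypothetical validator used to illustrate the test-case categories.
def parse_age(value: str) -> int:
    """Parse a user-supplied age string; the valid range is 0-150."""
    if not isinstance(value, str) or not value.strip().isdigit():
        raise ValueError(f"not a non-negative integer: {value!r}")
    age = int(value.strip())
    if age > 150:
        raise ValueError(f"age out of range: {age}")
    return age

# Positive: valid input, normal flow.
assert parse_age("42") == 42

# Negative: invalid input must raise the documented error.
try:
    parse_age("forty-two")
except ValueError:
    pass
else:
    raise AssertionError("expected ValueError")

# Edge: boundary values at both ends of the valid range.
assert parse_age("0") == 0
assert parse_age("150") == 150
```

Integration cases would instead feed `parse_age` through the component that calls it, checking the data flow end to end.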

Test Case Template

Test Case ID: TC_XXX
Test Case Name: [Descriptive name]
Objective: [What is being tested]
Pre-conditions: [Setup requirements]
Test Steps: [Step-by-step procedure]
Expected Results: [What should happen]
Actual Results: [What actually happened]
Status: [Pass/Fail/Blocked]
Comments: [Additional notes]
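
If test cases are managed programmatically, the template maps naturally onto a small record type. This is one possible encoding, not a prescribed schema; field names simply mirror the template.

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    """One test case; fields mirror the template above."""
    case_id: str                  # e.g. "TC_001"
    name: str
    objective: str
    pre_conditions: list[str] = field(default_factory=list)
    steps: list[str] = field(default_factory=list)
    expected: str = ""
    actual: str = ""
    status: str = "Not Run"       # Pass / Fail / Blocked / Not Run
    comments: str = ""

tc = TestCase(
    case_id="TC_001",
    name="Login with valid credentials",
    objective="Verify a registered user can sign in",
    steps=["Open login page", "Enter valid credentials", "Submit"],
    expected="User lands on the dashboard",
)
assert tc.status == "Not Run"
```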


3. Test Execution

3.1 Test Execution Process

Objective: Execute tests systematically and document results

Execution Checklist

  • Verify test environment setup
  • Execute test cases in planned sequence
  • Document actual results for each test
  • Capture evidence (screenshots, logs, etc.)
  • Record defects with proper classification
  • Update test case status
  • Track test execution progress

Test Result Categories

  • Pass: Test executed successfully, meets expected results
  • Fail: Test failed, does not meet expected results
  • Blocked: Test cannot be executed due to dependencies
  • Skip: Test intentionally not executed
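
Keeping these categories as a closed set (rather than free-form strings) prevents typos from skewing progress metrics. A minimal sketch using a Python enum:

```python
from enum import Enum

class TestResult(Enum):
    PASS = "Pass"        # meets expected results
    FAIL = "Fail"        # does not meet expected results
    BLOCKED = "Blocked"  # cannot be executed due to dependencies
    SKIP = "Skip"        # intentionally not executed

# Only Pass and Fail count as "executed" when tracking progress.
results = [TestResult.PASS, TestResult.PASS, TestResult.FAIL, TestResult.BLOCKED]
executed = [r for r in results if r in (TestResult.PASS, TestResult.FAIL)]
assert len(executed) == 3
```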

3.2 Defect Management

Objective: Properly identify, document, and track defects

Defect Report Template

Defect ID: DEF_XXX
Summary: [Brief description]
Description: [Detailed explanation]
Severity: [Critical/High/Medium/Low]
Priority: [High/Medium/Low]
Steps to Reproduce: [Detailed steps]
Expected Result: [What should happen]
Actual Result: [What actually happened]
Environment: [Test environment details]
Status: [Open/In Progress/Resolved/Closed]
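
The Severity and Priority fields drive triage order. A hedged sketch of one common convention (severity first, then priority; the rank tables are an assumption, not a standard):

```python
# Hypothetical triage helper: order defects by severity, then priority.
SEVERITY_ORDER = {"Critical": 0, "High": 1, "Medium": 2, "Low": 3}
PRIORITY_ORDER = {"High": 0, "Medium": 1, "Low": 2}

def triage(defects):
    """Return defects sorted so the most urgent come first."""
    return sorted(
        defects,
        key=lambda d: (SEVERITY_ORDER[d["severity"]], PRIORITY_ORDER[d["priority"]]),
    )

defects = [
    {"id": "DEF_002", "severity": "Medium", "priority": "High"},
    {"id": "DEF_001", "severity": "Critical", "priority": "Low"},
    {"id": "DEF_003", "severity": "Critical", "priority": "High"},
]
assert [d["id"] for d in triage(defects)] == ["DEF_003", "DEF_001", "DEF_002"]
```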


4. Test Coverage Verification

4.1 Coverage Analysis

Objective: Ensure all requirements and scenarios are tested

Coverage Verification Checklist

  • Requirement Coverage

    • All functional requirements tested
    • All non-functional requirements tested
    • All user stories covered
    • All acceptance criteria verified
  • Code Coverage (if applicable)

    • Statement coverage
    • Branch coverage
    • Path coverage
    • Function coverage
  • Scenario Coverage

    • All positive scenarios tested
    • All negative scenarios tested
    • All edge cases covered
    • All integration points tested
  • Data Coverage

    • Valid data sets tested
    • Invalid data sets tested
    • Boundary data tested
    • Special characters tested
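
Boundary data, in particular, can be generated mechanically. The sketch below applies classic boundary-value analysis to a numeric range; the six-value pattern (just outside, at, and just inside each end) is the usual textbook form.

```python
def boundary_values(low: int, high: int) -> list[int]:
    """Boundary-value analysis for a valid [low, high] range:
    values just outside, at, and just inside each end."""
    return [low - 1, low, low + 1, high - 1, high, high + 1]

# For a field that accepts 1-100:
assert boundary_values(1, 100) == [0, 1, 2, 99, 100, 101]
```

Feeding these six values through both positive and negative test cases covers the boundary row of the checklist above.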

4.2 Gap Analysis

Objective: Identify and address any testing gaps

Gap Analysis Process

  1. Identify Gaps

    • Compare test cases against requirements
    • Check for untested scenarios
    • Identify missing test data
    • Review uncovered code paths
  2. Address Gaps

    • Create additional test cases
    • Execute missing tests
    • Update test documentation
    • Verify gap closure
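
Step 1 is essentially a traceability check: every requirement should appear in at least one test case's coverage list. A minimal sketch, assuming requirements and test cases carry simple string IDs:

```python
def find_gaps(requirements, trace):
    """Return requirement IDs with no test case mapped to them.

    `trace` maps test-case IDs to the requirement IDs they cover.
    """
    covered = {req for reqs in trace.values() for req in reqs}
    return sorted(r for r in requirements if r not in covered)

requirements = ["REQ-1", "REQ-2", "REQ-3"]
trace = {"TC_001": ["REQ-1"], "TC_002": ["REQ-1", "REQ-3"]}
assert find_gaps(requirements, trace) == ["REQ-2"]
```

Each ID returned here becomes an input to step 2: write the missing test case, execute it, and re-run the check to verify gap closure.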

5. Post-Testing Phase

5.1 Test Summary and Reporting

Objective: Provide comprehensive test results and recommendations

Test Summary Report Template

Test Summary Report

Test Overview

Testing Period: [Start Date - End Date]
Total Test Cases: [Number]
Test Cases Executed: [Number]
Test Cases Passed: [Number]
Test Cases Failed: [Number]
Test Cases Blocked: [Number]

Coverage Summary

Requirement Coverage: [Percentage]
Code Coverage: [Percentage]
Scenario Coverage: [Percentage]

Defect Summary

Total Defects Found: [Number]
Critical Defects: [Number]
High Priority Defects: [Number]
Medium Priority Defects: [Number]
Low Priority Defects: [Number]

Test Results Analysis

[Detailed analysis of results]

Risks and Issues

[List of identified risks]

Recommendations

[Suggestions for improvement]

Sign-off Criteria

[Criteria for test completion]
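
The headline numbers in the overview section can be computed directly from the per-case statuses. A sketch (the convention that pass rate is taken over executed cases only, excluding Blocked and Skip, is an assumption worth stating explicitly in the report):

```python
def summarize(statuses):
    """Compute headline numbers for a test summary report."""
    counts = {s: statuses.count(s) for s in ("Pass", "Fail", "Blocked", "Skip")}
    executed = counts["Pass"] + counts["Fail"]
    pass_rate = 100.0 * counts["Pass"] / executed if executed else 0.0
    return {"total": len(statuses), **counts,
            "executed": executed, "pass_rate": pass_rate}

s = summarize(["Pass"] * 8 + ["Fail"] + ["Blocked"])
assert s["total"] == 10 and s["executed"] == 9
assert round(s["pass_rate"], 1) == 88.9
```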

5.2 Final Verification

Objective: Ensure all testing objectives are met

Final Verification Checklist

  • All planned test cases executed
  • All critical defects resolved
  • Test coverage meets requirements
  • All acceptance criteria verified
  • Test documentation complete
  • Stakeholder sign-off obtained

6. Checklists and Checkpoints

6.1 Comprehensive Testing Checklist

Phase 1: Planning

  • Requirements analyzed and documented
  • Test scope defined
  • Test strategy developed
  • Test cases designed and reviewed
  • Test environment prepared
  • Test data prepared

Phase 2: Execution

  • Test cases executed systematically
  • Results documented accurately
  • Defects logged and tracked
  • Test coverage monitored
  • Issues escalated when needed

Phase 3: Verification

  • All test cases executed
  • Coverage analysis completed
  • Gap analysis performed
  • Defects reviewed and prioritized
  • Retesting completed for fixes

Phase 4: Closure

  • Test summary report prepared
  • Lessons learned documented
  • Test artifacts archived
  • Sign-off obtained
  • Recommendations provided

6.2 Quality Gates

Gate 1: Test Planning Complete

  • All requirements have corresponding test cases
  • Test cases reviewed and approved
  • Test environment ready
  • Test data available

Gate 2: Test Execution Complete

  • All planned test cases executed
  • All results documented
  • Critical defects addressed
  • Coverage targets met
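
A quality gate is an all-or-nothing conjunction: every criterion must hold before the gate opens. A hypothetical check for Gate 2 (the 90% coverage target is an illustrative assumption, not a mandated threshold):

```python
def gate_2_passed(executed_pct, documented_pct, open_critical, coverage_pct,
                  coverage_target=90.0):
    """Hypothetical exit check for Gate 2: every criterion must hold."""
    return (
        executed_pct == 100.0        # all planned test cases executed
        and documented_pct == 100.0  # all results documented
        and open_critical == 0       # critical defects addressed
        and coverage_pct >= coverage_target
    )

assert gate_2_passed(100.0, 100.0, 0, 93.5)
assert not gate_2_passed(100.0, 100.0, 2, 93.5)  # open critical defects block the gate
```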

Gate 3: Test Closure

  • All exit criteria met
  • Test summary report approved
  • All artifacts delivered
  • Stakeholder acceptance obtained

7. Best Practices for AI Models

7.1 Systematic Approach

  • Follow the phases in order
  • Don't skip steps
  • Document everything
  • Maintain traceability

7.2 Thoroughness

  • Test all scenarios, not just happy paths
  • Consider edge cases and error conditions
  • Verify both positive and negative cases
  • Test with various data sets

7.3 Verification and Validation

  • Verify test cases against requirements
  • Validate actual results against expected results
  • Cross-check test coverage
  • Review and update test cases as needed

7.4 Communication

  • Provide clear, detailed reports
  • Highlight risks and issues
  • Make recommendations
  • Ensure stakeholder understanding

7.5 Continuous Improvement

  • Learn from each testing cycle
  • Update test cases based on findings
  • Improve test coverage over time
  • Refine testing processes

8. Conclusion

This guide provides a comprehensive framework for AI models to perform thorough testing. By following these instructions, checklists, and verification procedures, AI models can ensure complete test coverage and deliver high-quality testing results that meet user requirements.

Remember: The key to successful testing is not just executing tests, but ensuring that all aspects of the system are thoroughly examined and that all requirements are satisfied.