API Testing
Stoobly API Testing CLI - Questions & Answers
The API testing CLI enables you to validate individual requests and entire scenarios by replaying them and comparing responses against expected outcomes. These testing strategies help ensure API reliability and catch regressions.
Testing Requests
Q: How do I test a request to validate its response?
A: Use request test with the request key to replay and validate the response.
Example:
stoobly-agent request test "<REQUEST-KEY>"
Q: What test strategies are available?
A: Four strategies: diff (exact match), contract (schema validation), fuzzy (flexible match), and custom (custom logic).
Example:
# Diff testing (exact comparison, default)
stoobly-agent request test "<REQUEST-KEY>" --strategy diff
# Contract testing (schema validation)
stoobly-agent request test "<REQUEST-KEY>" --strategy contract
# Fuzzy testing (allows minor variations)
stoobly-agent request test "<REQUEST-KEY>" --strategy fuzzy
# Custom testing (use lifecycle hooks)
stoobly-agent request test "<REQUEST-KEY>" --strategy custom --lifecycle-hooks-path ./test-hooks.py
Q: How do I continue testing even if a test fails?
A: Use the --aggregate-failures flag to continue execution on failure.
Example:
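# Run every assertion and report all failures instead of stopping at the first
stoobly-agent request test "<REQUEST-KEY>" --aggregate-failures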
Q: How do I control which test results are displayed?
A: Use the --output-level option to filter results.
Example:
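# Show only the results you care about (the level value is a placeholder;
# run stoobly-agent request test --help for the supported levels)
stoobly-agent request test "<REQUEST-KEY>" --output-level <OUTPUT-LEVEL>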
Q: How do I filter which properties are tested?
A: Use the --filter option to selectively test properties.
Example:
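# Restrict the comparison to selected properties (the filter value is a
# placeholder; see --help for the expected filter format)
stoobly-agent request test "<REQUEST-KEY>" --filter <PROPERTY-FILTER>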
Q: How do I test a request with mock dependencies?
A: Use --response-fixtures-path or --public-directory-path to provide mock responses for dependencies.
Example:
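# Serve recorded fixtures for dependent requests (paths are illustrative)
stoobly-agent request test "<REQUEST-KEY>" --response-fixtures-path ./fixtures.yml
# Or serve static mock responses from a public directory
stoobly-agent request test "<REQUEST-KEY>" --public-directory-path ./public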
Q: How do I test a request and save the results?
A: Use the --save flag to persist test results (requires remote features).
Example:
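stoobly-agent request test "<REQUEST-KEY>" --save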
Q: How do I save test results to a specific report?
A: Use the --report-key option to add results to a report (requires remote features).
Example:
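stoobly-agent request test "<REQUEST-KEY>" --report-key "<REPORT-KEY>"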
Q: How do I test with assigned and validated aliases?
A: Combine --assign and --validate options for comprehensive testing.
Example:
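# Assign an alias value for replay and validate it in the response
# (the alias syntax shown is illustrative; match your project's alias conventions)
stoobly-agent request test "<REQUEST-KEY>" --assign "<ALIAS>=<VALUE>" --validate "<ALIAS>"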
Q: How do I batch test multiple requests?
A: Use a script to test multiple requests and collect results.
Example:
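# A minimal sketch: replay each recorded request and collect failures
failed=0
for key in "<REQUEST-KEY-1>" "<REQUEST-KEY-2>" "<REQUEST-KEY-3>"; do
  stoobly-agent request test "$key" || { echo "FAILED: $key"; failed=1; }
done
exit $failed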
Q: How do I use request commands in a CI/CD pipeline?
A: Test recorded requests as part of your automated test suite.
Example:
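# In a CI job, a non-zero exit code fails the build on regression
stoobly-agent request test "<REQUEST-KEY>" --strategy contract || exit 1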
Q: How do I integrate request testing with monitoring?
A: Periodically replay and test requests to monitor API health.
Example:
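# Cron entry: replay the request hourly and log the outcome
# (schedule and log path are illustrative)
0 * * * * stoobly-agent request test "<REQUEST-KEY>" >> /var/log/api-health.log 2>&1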
Testing Scenarios
Q: How do I test a scenario to validate responses?
A: Use scenario test with the scenario key to replay and validate all requests.
Example:
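stoobly-agent scenario test "<SCENARIO-KEY>"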
Q: What test strategies are available for scenarios?
A: Four strategies: diff (exact match), contract (schema validation), fuzzy (flexible match), and custom (custom logic).
Example:
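# Diff testing (exact comparison, default)
stoobly-agent scenario test "<SCENARIO-KEY>" --strategy diff
# Contract testing (schema validation)
stoobly-agent scenario test "<SCENARIO-KEY>" --strategy contract
# Fuzzy testing (allows minor variations)
stoobly-agent scenario test "<SCENARIO-KEY>" --strategy fuzzy
# Custom testing (use lifecycle hooks)
stoobly-agent scenario test "<SCENARIO-KEY>" --strategy custom --lifecycle-hooks-path ./test-hooks.py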
Q: How do I continue testing even if a request fails?
A: Use the --aggregate-failures flag to continue execution on failure.
Example:
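# Test every request in the scenario even when an earlier one fails
stoobly-agent scenario test "<SCENARIO-KEY>" --aggregate-failures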
Q: How do I control which test results are displayed?
A: Use the --output-level option to filter results.
Example:
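# The level value is a placeholder; see --help for the supported levels
stoobly-agent scenario test "<SCENARIO-KEY>" --output-level <OUTPUT-LEVEL>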
Q: How do I filter which properties are tested?
A: Use the --filter option to selectively test properties.
Example:
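# The filter value is a placeholder; see --help for the expected format
stoobly-agent scenario test "<SCENARIO-KEY>" --filter <PROPERTY-FILTER>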
Q: How do I test a scenario with mock dependencies?
A: Use --response-fixtures-path or --public-directory-path to provide mock responses.
Example:
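# Paths are illustrative
stoobly-agent scenario test "<SCENARIO-KEY>" --response-fixtures-path ./fixtures.yml
stoobly-agent scenario test "<SCENARIO-KEY>" --public-directory-path ./public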
Q: How do I save test results to a report?
A: Use the --report-key option to add results to a report (requires remote features).
Example:
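stoobly-agent scenario test "<SCENARIO-KEY>" --report-key "<REPORT-KEY>"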
Q: How do I save test results?
A: Use the --save flag to persist test results (requires remote features).
Example:
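stoobly-agent scenario test "<SCENARIO-KEY>" --save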
Environment-Specific Testing
Q: How do I run the same scenario test against different environments?
A: Use the --host option to target different environments.
Example:
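# Host values are illustrative
stoobly-agent scenario test "<SCENARIO-KEY>" --host staging.example.com
stoobly-agent scenario test "<SCENARIO-KEY>" --host api.example.com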
Q: How do I create environment-specific test scenarios?
A: Create separate scenarios for each environment or use the same scenario with different hosts.
Example:
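# Separate scenarios per environment (keys are illustrative)
stoobly-agent scenario test "<STAGING-SCENARIO-KEY>"
stoobly-agent scenario test "<PRODUCTION-SCENARIO-KEY>"
# Or reuse one scenario and switch the target host
stoobly-agent scenario test "<SCENARIO-KEY>" --host staging.example.com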
CI/CD Integration
Q: How do I use scenario tests in CI/CD pipelines?
A: Test scenarios as part of your automated test suite.
Example:
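# In a CI job, aggregate failures for a complete report and fail the build on regression
stoobly-agent scenario test "<SCENARIO-KEY>" --aggregate-failures || exit 1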
Q: How do I generate test reports from scenario tests?
A: Use JSON format and process the output.
Example:
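A minimal sketch, assuming the CLI can emit machine-readable output (the flag name is illustrative; check stoobly-agent scenario test --help for the actual option):
stoobly-agent scenario test "<SCENARIO-KEY>" --format json > results.json
# Post-process the results with a JSON tool such as jq
jq . results.json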
Advanced Testing Operations
Q: How do I chain multiple test scenarios?
A: Use a script to execute scenarios in sequence with dependency handling.
Example:
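# Stop the chain as soon as a prerequisite scenario fails
stoobly-agent scenario test "<SETUP-SCENARIO-KEY>" && \
stoobly-agent scenario test "<MAIN-SCENARIO-KEY>" && \
stoobly-agent scenario test "<CLEANUP-SCENARIO-KEY>"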
Q: How do I conditionally execute test scenarios?
A: Use scripts with conditional logic based on scenario results.
Example:
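# Run the full suite only when the smoke scenario passes
if stoobly-agent scenario test "<SMOKE-SCENARIO-KEY>"; then
  stoobly-agent scenario test "<FULL-SCENARIO-KEY>"
else
  echo "Smoke scenario failed; skipping full suite"
fi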
Monitoring and Debugging
Q: How do I debug a failing scenario test?
A: Increase the log level for more verbose output.
Example:
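A sketch, assuming a --log-level option (the flag name and value are illustrative; see --help):
stoobly-agent scenario test "<SCENARIO-KEY>" --log-level debug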
Q: How do I identify which request in a scenario test is failing?
A: Use detailed output and logging to track request execution.
Example:
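# Run every request despite failures, then review the per-request output
stoobly-agent scenario test "<SCENARIO-KEY>" --aggregate-failures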
Q: How do I monitor scenario test health over time?
A: Set up periodic scenario testing with result tracking.
Example:
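# Cron entry: replay the scenario every 15 minutes and append results
# (schedule and log path are illustrative)
*/15 * * * * stoobly-agent scenario test "<SCENARIO-KEY>" >> /var/log/scenario-health.log 2>&1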