
Scale API Testing


In order to scale API testing, we want to ensure that:

  • Tests can be developed quickly

  • Maintenance headaches are minimized

  • Tests integrate seamlessly into CI/CD pipelines

Scale Test Development

When testing for functionality, we want to address the following concerns:

  • When sending a single request, do we obtain an expected response?

  • When sending requests in sequence, do we still obtain expected responses?

  • Does our API enforce an accepted list of parameters?

Tests written to satisfy these goals generally follow this pattern:

  • Send a request with specific parameters

  • Validate response

  • For each property in the response, check that it is expected

The following example illustrates the above pattern.

Given the following endpoint schema:

Creates a user

POST /users

Request Body

Name         Type     Description

first_name   String
last_name    String
age          Number
country      String

{
  first_name: String,
  last_name: String,
  age: Number,
  country: String,
}

A simple test could be as follows:

require 'rails_helper'

RSpec.describe "Users", type: :request do
  describe "POST /users" do
    it "creates successfully" do
      # Send a request with specific parameters
      post "/users", {
        first_name: 'John',
        last_name: 'Smith',
        age: 25,
        country: 'UK',
      }
      
      # Validate the response
      expect(response.status).to eq(200)
      
      user = JSON.parse(response.body)
      
      # For each property in the response, check that it is expected
      expect(user['first_name']).to eq('John')
      expect(user['last_name']).to eq('Smith')
      expect(user['age']).to eq(25)
      expect(user['country']).to eq('UK')
    end
  end
end

Given specific request parameters, we expect a specific response. Seems pretty minimal, right? However, there are some critiques:

  • The inputs are tightly coupled to the expectations in the form of a contract, i.e. changing an input will cause the corresponding expectation to fail

  • Each property in the response becomes an expectation

  • Imagine having to chain this request with a GET request for the resource's details

  • When we think about creating test variations (different charsets, missing fields, empty fields, etc.), we can see how quickly this gets out of control, as the sketch below illustrates
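As a minimal sketch of that explosion (the variations and the defaults helper here are illustrative, not part of Stoobly), even a handful of variations multiplies the hand-written expectations:

require 'rails_helper'

RSpec.describe "Users", type: :request do
  # Hypothetical baseline parameters shared by every variation
  let(:defaults) do
    { first_name: 'John', last_name: 'Smith', age: 25, country: 'UK' }
  end

  [
    { first_name: '' },      # empty field
    { last_name: nil },      # missing field
    { first_name: 'Jöhn' },  # different charset
  ].each do |overrides|
    it "creates a user with #{overrides}" do
      post "/users", params: defaults.merge(overrides)

      # Every variation still needs its own full set of expectations,
      # one per response property
      expect(response.status).to eq(200)
    end
  end
end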

What if, instead of hard-coding inputs and expected outputs, we recorded them?

Given a driver, e.g. a user interface, Stoobly intercepts and records incoming requests. The recorded response becomes the expected test result. A key difference here is that instead of manually specifying all the properties that should match, we use the entire response as the expectation. This leads to two potentially problematic scenarios:

  1. When a property within the response depends on the value of a previous request

  2. When a property within a response is not deterministic (e.g. timestamps)

Alias Tagging

To address the first problem listed above, we asked ourselves whether the values of a previous request could be saved to a variable. The upcoming alias feature captures this very idea: Stoobly supports tagging parts of a request so that properties tagged with the same alias name in a successive request are replaced with values from a previous request. To provide finer-grained control over how values are replaced, we also provide various alias resolve strategies.
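As a hypothetical illustration (the {{...}} tag notation here is illustrative, not Stoobly's actual syntax), suppose the recorded response to POST /users contains an id that a later request depends on:

# Recorded response to POST /users; the id property is tagged
# with the alias {{user_id}}
{
  id: 1,              # tagged as {{user_id}}
  first_name: 'John',
  last_name: 'Smith',
}

# A successive request referencing the same alias: during replay,
# {{user_id}} resolves to the id returned by the previous request
GET /users/{{user_id}}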

Schema Definitions

To address the second problem listed above, we provide dynamically generated endpoint schemas. When an endpoint is created for a request, Stoobly builds a schema definition from the request parameters and response properties. A request belongs to an endpoint, so any schema rules applied to the endpoint are applied to the request during testing. When a property within a response is marked as non-deterministic, it is skipped during comparison.
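For example, a generated schema for the response might look like the following, where the annotations marking properties as non-deterministic are illustrative:

{
  id: Number,            # non-deterministic: differs on every run, skipped
  first_name: String,
  last_name: String,
  created_at: String,    # non-deterministic timestamp, skipped
}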

Scale Test Maintenance

The following changes trigger a need to update tests:

  • Request parameter changes mean test inputs have to be updated

  • Response schema changes mean test expectations have to be updated

For example, below is an updated endpoint schema where we change the casing of first_name and last_name to firstName and lastName, respectively:

Creates a user

POST /users

Request Body

Name         Type     Description

firstName*   String
lastName*    String
age*         Number
country*     String

(* indicates a required field)

{
  firstName: String,
  lastName: String,
  age: Number,
  country: String,
}

This would then mean that every test that uses this endpoint needs to be updated. That is, the cost of maintaining tests scales linearly with the number of tests written. The following challenges arise with modifying an endpoint schema:

  • Determining which tests need updating

  • Updating each test's request parameters and expectations, as sketched below
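Under the new schema, the earlier test requires the same mechanical rename everywhere the old names appear (a minimal sketch of the edit):

      # Request parameters renamed to match the new schema
      post "/users", params: {
        firstName: 'John',    # was first_name
        lastName: 'Smith',    # was last_name
        age: 25,
        country: 'UK',
      }

      # Expectations renamed as well
      expect(user['firstName']).to eq('John')
      expect(user['lastName']).to eq('Smith')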

What if we could use the updated endpoint schema to help pinpoint and update tests that no longer fulfill the new contract? One significant advantage of typing is that IDEs can provide static analysis; let Stoobly provide something similar for testing.

To address the challenges with modifying an endpoint schema, Stoobly provides:

  • Contract testing to pinpoint which requests no longer adhere to the endpoint schema

  • Replaying and re-recording requests to capture an up-to-date response expectation

Integrating Tests

To ensure seamless integration into CI/CD pipelines, Stoobly will soon provide the following features:

  • A Bash CLI that runs tests and exits with code 1 to denote failure, as sketched below

  • Configurable JSON output format

  • A test result report accessible in the web browser for each run
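A minimal sketch of consuming that exit code from a CI step, where the run_api_tests command name is hypothetical:

# Run the test suite as a child process; system returns false when
# the command exits non-zero, so the CI job fails accordingly
passed = system("run_api_tests")
exit(passed ? 0 : 1)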

For power users, we also support lifecycle hooks for fine-grained control over how tests are run. For more information, see Lifecycle Hooks under Experimental Features.