Scale API Testing

The following requires experimental features.

In order to scale API testing, we want to ensure that:

  • Tests can be developed quickly

  • Maintenance headaches are minimized

  • Tests integrate seamlessly into CI/CD pipelines

Scale Test Development

When testing for functionality, we want to address the following concerns:

  • When sending a single request, do we obtain an expected response?

  • When sending requests in sequence, do we still obtain expected responses?

  • Does our API enforce an accepted list of parameters?

As we write tests to satisfy the above goals, they generally follow this pattern:

  • Send a request with specific parameters

  • Validate response

  • For each property in the response, check that it is expected

The following example illustrates the above pattern.

Given the following endpoint schema:

Creates a user

POST /users

Request Body

{
  first_name: String,
  last_name: String,
  age: Number,
  country: String,
}

A simple test could be as follows:

require 'rails_helper'

RSpec.describe "Users", type: :request do
  describe "POST /users" do
    it "creates successfully" do
      # Send a request with specific parameters
      post "/users", {
        first_name: 'John',
        last_name: 'Smith',
        age: 25,
        country: 'UK',
      }
      
      expect(response.status).to eq(200)
      
      user = JSON.parse(response.body)
      
      # For each property in the response, check that it is expected
      expect(user['first_name']).to eq('John')
      expect(user['last_name']).to eq('Smith')
      expect(user['age']).to eq(25)
      expect(user['country']).to eq('UK')
    end
  end
end

Given specific request parameters, we expect a specific response. Seems pretty minimal, right? However, there are some critiques:

  • The inputs are tightly coupled to the expectations in the form of a contract, i.e. changing an input will cause the corresponding expectation to fail

  • Each property in the response becomes an expectation

  • Imagine having to chain this request with a GET request for the resource's details (see the sketch after this list)

  • When we think about creating test variations (different charsets, missing fields, empty fields, etc.), we can see how quickly this gets out of control
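
To make the chaining critique concrete, below is a hedged sketch of what the test grows into once we also verify the resource with a follow-up read. It assumes a GET /users/:id endpoint that returns an id along with the same properties; that endpoint is not part of the schema above and is illustrative only.

require 'rails_helper'

RSpec.describe "Users", type: :request do
  describe "POST /users followed by GET /users/:id" do
    it "creates and then fetches the user" do
      post "/users", params: {
        first_name: 'John',
        last_name: 'Smith',
        age: 25,
        country: 'UK',
      }

      expect(response.status).to eq(200)
      created = JSON.parse(response.body)

      # Chaining: the follow-up request reuses a value from the previous response
      get "/users/#{created['id']}"

      expect(response.status).to eq(200)
      user = JSON.parse(response.body)

      # The same block of expectations has to be repeated for the second response
      expect(user['first_name']).to eq('John')
      expect(user['last_name']).to eq('Smith')
      expect(user['age']).to eq(25)
      expect(user['country']).to eq('UK')
    end
  end
end

Every additional request in the sequence repeats the same block of expectations, which is exactly the growth that the recording approach described below avoids.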

What if instead of hard-coding inputs and expected outputs, we record them instead?

Given a driver, e.g. a user interface, Stoobly intercepts and records incoming requests. The recorded response becomes the expected test result. A key difference here is that instead of manually specifying all the properties that should match, we use the entire response as the expectation. This leads to two potentially problematic scenarios:

  1. When a property within the response depends on the value of a previous request

  2. When a property within a response is not deterministic (e.g. timestamps)
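
Before looking at how Stoobly addresses these scenarios, here is a minimal sketch of what whole-response comparison looks like in test form. The fixture path, file format, and structure below are assumptions for illustration, not Stoobly's actual replay mechanism.

require 'rails_helper'

RSpec.describe "Users (recorded)", type: :request do
  it "matches the recorded response" do
    # Hypothetical fixture produced by recording; path and format are illustrative only
    recorded = JSON.parse(File.read('spec/fixtures/recorded/post_users.json'))

    post "/users", params: recorded['request_body']

    # The entire recorded response is the expectation -- no per-property assertions
    expect(response.status).to eq(recorded['status'])
    expect(JSON.parse(response.body)).to eq(recorded['response_body'])
  end
end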

Alias Tagging

To address the first problem listed above, we asked ourselves whether we could save values from a previous request to a variable. The upcoming alias feature captures this idea. Stoobly supports tagging parts of a request so that properties tagged with the same alias name in a successive request are replaced with values from a previous request. To provide finer-grained control over how values are replaced, we also provide various alias resolve strategies.
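
Below is a conceptual sketch of the idea behind aliases, not Stoobly's implementation or alias syntax; the {{user_id}} notation and the resolve step are invented purely for illustration.

# Conceptual sketch of the idea behind aliases (not Stoobly's implementation).
# A value captured from a previous response is substituted wherever a
# successive request references the same alias name.
aliases = {}

# After POST /users, capture the created id under an alias
created = { 'id' => 42, 'first_name' => 'John' }
aliases['user_id'] = created['id']

# A later request template references the alias; resolve it before sending
template = '/users/{{user_id}}'  # "{{...}}" is illustrative syntax only
path = template.gsub(/\{\{(\w+)\}\}/) { aliases[Regexp.last_match(1)].to_s }
# path == "/users/42"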

Schema Definitions

To address the second problem listed above, we provide dynamically generated endpoint schemas. When an endpoint is created for a request, Stoobly builds a schema definition based on the request parameters and response properties. A request belongs to an endpoint, so any schema rules applied to the endpoint are applied to the request during testing. When a property within a response is marked as non-deterministic, it is skipped.
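
The following is a conceptual sketch of how such a schema rule could be applied during comparison; the helper and the created_at/updated_at property names are assumptions, not Stoobly's internals.

# Conceptual sketch (not Stoobly's internals) of comparing a response against
# its recorded expectation while skipping properties the endpoint schema marks
# as non-deterministic, e.g. timestamps. The property names are hypothetical.
NON_DETERMINISTIC = %w[created_at updated_at].freeze

def matches_recorded?(actual, recorded, skipped = NON_DETERMINISTIC)
  keys = actual.keys | recorded.keys
  keys.all? { |key| skipped.include?(key) || actual[key] == recorded[key] }
end

matches_recorded?(
  { 'first_name' => 'John', 'created_at' => '2023-01-02T00:00:00Z' },
  { 'first_name' => 'John', 'created_at' => '2023-01-01T00:00:00Z' }
)
# => true, because created_at is skipped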

Scale Test Maintenance

The following changes trigger a need to update tests:

  • Request parameter changes mean test inputs have to be updated

  • Response schema changes mean test expectations have to be updated

For example, below is an updated endpoint schema where we change the casing of first_name and last_name to firstName and lastName, respectively:

Creates a user

POST /users

Request Body

{
  firstName: String,
  lastName: String,
  age: Number,
  country: String,
}

This means that every test that uses this endpoint needs to be updated. That is, the cost of maintaining tests scales linearly with the number of tests written. The following challenges arise when modifying an endpoint schema:

  • Determining which tests need updating

  • Updating the test request parameters and expectations

What if we could use the updated endpoint schema to pinpoint and update tests that no longer fulfill the new contract? One significant advantage of typing is that IDEs can provide static analysis; let Stoobly provide something similar for testing.

To address the challenges with modifying an endpoint schema, Stoobly provides:

  • Contract testing to pinpoint which requests no longer adhere to the endpoint schema (see the sketch after this list)

  • Replaying and re-recording requests to produce up-to-date response expectations
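
To illustrate the contract-testing point, here is a hedged sketch of the kind of check involved; the helper and the schema representation are hypothetical and not Stoobly's API.

# Conceptual sketch of contract checking (the helper is hypothetical):
# flag recorded request bodies that no longer match the updated
# endpoint schema for POST /users.
UPDATED_REQUEST_SCHEMA = %w[firstName lastName age country].freeze

def adheres_to_schema?(request_body)
  request_body.keys.sort == UPDATED_REQUEST_SCHEMA.sort
end

old_recording = { 'first_name' => 'John', 'last_name' => 'Smith', 'age' => 25, 'country' => 'UK' }
adheres_to_schema?(old_recording)
# => false, so this recorded request needs to be replayed and re-recorded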

Integrating Tests

To ensure seamless integration into CI/CD pipelines, Stoobly will soon provide the following features:

  • A Bash CLI that runs tests and exits with code 1 to denote failure

  • Configurable JSON output format

  • A test result report accessible in the web browser for each run

For power users, we also support lifecycle hooks for fine-grained control over how tests are run. For more information, see the lifecycle hooks documentation.
