Enable E2E Testing

When E2E testing, we want to address the following issues:

  • How do we minimize flaky tests?

  • How do we ensure tests complete within a reasonable amount of time?

Minimizing Flakiness

To minimize flaky tests, we should use mocks in place of calling live services. Examples of live services include authentication, internal APIs, and external APIs such as Stripe for payments or Twilio for SMS messages. Depending on live services has the following advantages:

  1. Maintenance and updates by a dedicated team

  2. Data created by maintainers

However, it also incurs the following disadvantages:

  1. Unexpected downtime

  2. Inconsistent responses due to updates to a shared service

  3. Long response latencies

These disadvantages are what make E2E tests flaky. To address the flakiness, we can record requests to create mock APIs. The following provides an overview of how Stoobly can help, with a sketch after the list:

  1. Run the E2E test to trigger sending requests

  2. Stoobly will intercept the requests and record them

  3. Configure Stoobly to mock instead of record requests

  4. Run E2E tests, API tests, or UI tests
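
As a rough sketch of what this looks like in practice, the test below sends its HTTP traffic through a locally running agent by setting a proxy on the client. The proxy address (http://localhost:8080) and the API under test are assumptions for illustration; whether the agent records or mocks is controlled by its intercept mode, not by the test code.

```python
import requests

# Assumed address of the locally running agent proxy; adjust to your setup.
AGENT_PROXY = "http://localhost:8080"
PROXIES = {"http": AGENT_PROXY, "https": AGENT_PROXY}

# Hypothetical API under test.
BASE_URL = "https://api.example.com"


def test_list_orders():
    # The same test runs unchanged in both phases:
    #   record mode: the agent forwards the request to the live service
    #                and records the response
    #   mock mode:   the agent serves the previously recorded response
    # If the agent terminates TLS, its CA certificate must be trusted by
    # this client (or verification relaxed for local runs).
    response = requests.get(f"{BASE_URL}/orders", proxies=PROXIES, timeout=10)

    assert response.status_code == 200
    assert isinstance(response.json(), list)
```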

But using mocks in place of live services may sound counter-intuitive when it comes to E2E testing. After all, isn't the point of E2E testing to validate real user flows? This concern is valid when mocks are consumer generated. Consumer-generated mocks tend not to represent real data: consumers likely do not have the same understanding of API responses as the maintainers of the system that produces them. Furthermore, mocks may become out of date as request contracts change. In Stoobly's case, mocks are system generated and stored for consumer testing at a later point.

Minimizing Run Time

Latency is affected by the following factors:

  1. Distance between client and server

  2. Network congestion

  3. Number of concurrent clients talking to the server

  4. Server processing time

By replacing the live service with a mock service running locally, we can minimize the impact of all the above factors.
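
To make the effect concrete, the sketch below times the same request issued directly against a remote service and again through a locally running mock. The URLs and proxy address are placeholders, and the absolute numbers will vary with your environment.

```python
import time

import requests

# Placeholder endpoints; substitute your own service and local agent address.
LIVE_URL = "https://api.example.com/orders"
MOCK_PROXIES = {"http": "http://localhost:8080", "https": "http://localhost:8080"}


def time_request(url, proxies=None):
    """Return the wall-clock seconds taken by a single GET request."""
    start = time.perf_counter()
    requests.get(url, proxies=proxies, timeout=30)
    return time.perf_counter() - start


# Direct round trip to the live service: distance, congestion, other
# clients, and server processing time all contribute to the latency.
live_seconds = time_request(LIVE_URL)

# Round trip through the locally running mock: the request never leaves
# the machine, so only the local lookup time remains.
mock_seconds = time_request(LIVE_URL, proxies=MOCK_PROXIES)

print(f"live: {live_seconds:.3f}s  mocked: {mock_seconds:.3f}s")
```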

Distance between client and server

When mocking, the distance traveled for a request is from the client to the agent. By default, the agent is run as a local service.
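
For example, assuming the agent listens on localhost:8080 (the port is an assumption; adjust it to your configuration), existing HTTP clients can be pointed at it through the standard proxy environment variables, which libraries such as Python's requests honor by default:

```python
import os

import requests

# Assumed local agent address; most HTTP clients honor these variables.
os.environ["HTTP_PROXY"] = "http://localhost:8080"
os.environ["HTTPS_PROXY"] = "http://localhost:8080"

# The request now only travels from this process to the local agent.
response = requests.get("https://api.example.com/orders", timeout=10)
print(response.status_code)
```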

Network congestion

Because the agent is run locally, network bandwidth usage should only be affected by other local processes.

Number of concurrent clients

Because the agent is run locally, the number of concurrent clients talking to the server (the agent in this case) is limited to just you. It no longer scales with the number of team members you have.

Server processing time

The work done to compute the mock for a request scales logarithmically with the total number of recorded requests. In comparison, the work done by a live service generally scales with a far greater number of factors. These factors include, but are not limited to, the responsiveness of upstream service dependencies and data source sizes. The larger the number of data sources and the more data each source has, the longer it takes for a service to compute a response.
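
As a toy illustration of that scaling claim (not Stoobly's actual implementation), the sketch below keeps request fingerprints in a sorted index so that looking up a recorded response is a binary search, i.e. O(log n) in the number of recordings:

```python
import bisect
import hashlib


class MockStore:
    """Toy mock store: a sorted index of request fingerprints to responses."""

    def __init__(self):
        self._keys = []       # sorted request fingerprints
        self._responses = []  # recorded responses, parallel to _keys

    @staticmethod
    def _fingerprint(method, path, body=b""):
        return hashlib.sha256(f"{method} {path}".encode() + body).hexdigest()

    def record(self, method, path, response, body=b""):
        key = self._fingerprint(method, path, body)
        index = bisect.bisect_left(self._keys, key)
        self._keys.insert(index, key)
        self._responses.insert(index, response)

    def mock(self, method, path, body=b""):
        # Binary search over the sorted index: O(log n) lookups, so the cost
        # of serving a mock grows slowly as more requests are recorded.
        key = self._fingerprint(method, path, body)
        index = bisect.bisect_left(self._keys, key)
        if index < len(self._keys) and self._keys[index] == key:
            return self._responses[index]
        return None


store = MockStore()
store.record("GET", "/orders", {"status": 200, "body": "[]"})
print(store.mock("GET", "/orders"))  # {'status': 200, 'body': '[]'}
```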
