Enable E2E Testing

Current State of E2E Testing

The following table breaks down the testing pyramid and summarizes the relative proportion of total test effort each form of testing typically takes.

Test          Summary                                             Percentage
Unit          Test individual components in isolation             70
Integration   Test interactions between multiple components       20
E2E           Test the application from the user's perspective    10

For applications with little dependence on external APIs (third-party services or libraries), most of the expected behavior is defined within the application itself. That is, component input and return value types are owned by the application. To validate that this expected behavior is correct, we need a high degree of test variance to sufficiently cover the mappings from inputs to return values.

Applications with little dependence on external APIs tend to lean more heavily into unit tests.
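
For example, a parameterized unit test covers many input-to-return-value mappings cheaply. Below is a minimal sketch using pytest; the component under test is hypothetical:

```python
import pytest

# Hypothetical application-owned component: both the input and the
# return value types are defined within the application itself.
def normalize_username(raw: str) -> str:
    return raw.strip().lower()

# High test variance: each case covers one input -> return value mapping.
@pytest.mark.parametrize("raw, expected", [
    ("Alice", "alice"),
    ("  Bob  ", "bob"),
    ("CAROL", "carol"),
    ("", ""),
])
def test_normalize_username(raw: str, expected: str) -> None:
    assert normalize_username(raw) == expected
```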

Applications with high dependence on external APIs define relatively less new expected behavior and focus instead on integration. For example, a user-facing application may call several external APIs and use an externally maintained development framework. There is far less concern about validating individual components. Instead, the primary concern for the new expected behavior is whether the integration of components meets user needs.

Applications with high dependence on external APIs should lean more heavily into E2E tests.

The following table offers a revised breakdown of the relative proportion of total test effort for applications with a high dependence on external APIs.

Test          Summary                                             Percentage
Unit          Test individual components in isolation             10
Integration   Test interactions between multiple components       20
E2E           Test the application from the user's perspective    70

The above breakdown is an idealized goal of how testing should look for these types of applications. In reality, several barriers to entry make this breakdown costly to achieve. When E2E testing, we want to address the following barriers to entry:

  • How do we ensure tests complete within a reasonable amount of time?

  • How do we minimize flaky tests?

  • How do we minimize CI setup time?

How Stoobly Helps

Minimize Run Time

Latency is affected by the following factors:

Problem                              Description
Distance between client and server   The further the physical distance, the longer it takes for data to be transferred
Network congestion                   Increased traffic negatively impacts queuing times
Number of concurrent clients         Resource-bottlenecked APIs may have to service other requests
Server processing time               The requested action may have to perform slow operations, e.g. reaching out to external APIs

By replacing the live service with a mock service running locally, we can minimize the impact of all the above factors.

Distance between client and server

When mocking, a request only travels from the client to the agent. By default, the agent runs as a local service.
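
As a sketch, a test client can route its traffic through the local agent by treating it as an HTTP proxy. The proxy address below is an assumption; adjust it to wherever your agent actually listens:

```python
import requests

# Assumption: the agent is exposed as a local HTTP proxy on port 8080.
# Adjust the address to match your setup.
AGENT_PROXY = "http://localhost:8080"

session = requests.Session()
session.proxies.update({"http": AGENT_PROXY, "https": AGENT_PROXY})

# The request never leaves the machine, so physical distance between
# the client and the server is effectively eliminated.
response = session.get("http://api.example.com/users/1")
print(response.status_code)
```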

Network congestion

Because the agent is run locally, network bandwidth usage should only be affected by other local processes.

Number of concurrent clients

Because the agent is run locally, the number of concurrent clients talking to the server (the agent in this case) is limited to just you. It no longer scales with the number of team members you have.

Server processing time

The work done to compute the mock for a request scales logarithmically with the total number of recorded requests. In comparison, the work done by a live service generally scales with far more factors, including, but not limited to, the responsiveness of upstream service dependencies and the number and size of its data sources: the more data sources a service has and the more data each source holds, the longer it takes to compute a response.
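
As a toy illustration of that logarithmic scaling (not Stoobly's actual implementation), recorded requests kept sorted by a lookup key can be matched with a binary search:

```python
import bisect
from typing import Optional

# Toy model: recorded requests indexed by a sorted lookup key.
recorded_keys = sorted([
    "GET /orders",
    "GET /users/1",
    "GET /users/2",
])

def find_mock(key: str) -> Optional[str]:
    # Binary search: O(log n) in the number of recorded requests.
    i = bisect.bisect_left(recorded_keys, key)
    if i < len(recorded_keys) and recorded_keys[i] == key:
        return recorded_keys[i]
    return None

print(find_mock("GET /users/2"))  # -> "GET /users/2"
```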

Minimize Flakiness

To minimize flaky tests, we should use mocks in place of calls to live services. Examples of live services include authentication services, internal APIs, and external APIs such as Stripe for payments or Twilio for SMS messages. Depending on live services has the following advantages:

  • Maintenance and updates by a dedicated team

  • Data created by maintainers

However, it also incurs the following disadvantages:

  • Unexpected downtime

  • Inconsistent responses due to updates to a shared service

  • Long response latencies

These disadvantages are what make E2E tests flaky. To help address the flakiness, we can record requests to create mock APIs. The following provides an overview of how Stoobly can help; a test sketch follows the list:

  1. Run the E2E test to trigger sending requests

  2. Stoobly will intercept the requests and record them

  3. Configure Stoobly to mock instead of record requests

  4. Run E2E tests, API tests, or UI tests
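
Note that the test itself does not change between steps 1 and 4; only the agent's mode does. A minimal sketch, assuming the agent is reachable as a local HTTP proxy (the address and endpoint are illustrative):

```python
import os
import requests

# Assumption: the agent listens as a local HTTP proxy; override via env.
AGENT_PROXY = os.environ.get("AGENT_PROXY", "http://localhost:8080")

def test_user_profile_loads() -> None:
    session = requests.Session()
    session.proxies.update({"http": AGENT_PROXY, "https": AGENT_PROXY})

    # In record mode (steps 1-2), this request is forwarded upstream and
    # recorded; in mock mode (steps 3-4), the recorded response is served.
    response = session.get("http://api.example.com/users/1")
    assert response.status_code == 200
```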

But using mocks in place of live services may sound counterintuitive when it comes to E2E testing. After all, isn't the point of E2E testing to validate real user flows? This concern is valid when mocks are consumer generated. Consumer-generated mocks have a tendency not to represent real data; that is, consumers likely do not have the same understanding of API responses as the maintainers of the system that produces them. Furthermore, mocks may become out of date as request contracts change.

With Stoobly, recorded mocks can be asynchronously validated with API testing.
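
For example, a recorded response body can be checked against the schema published by the API's maintainers, so mocks that drift out of date are caught. A minimal sketch using the jsonschema package; the schema and recorded body are illustrative:

```python
from jsonschema import validate  # pip install jsonschema

# Illustrative schema for a recorded /users/{id} response body.
user_schema = {
    "type": "object",
    "required": ["id", "email"],
    "properties": {
        "id": {"type": "integer"},
        "email": {"type": "string"},
    },
}

# Body taken from a recording; validate() raises ValidationError if the
# mock has drifted from the published contract.
recorded_body = {"id": 1, "email": "alice@example.com"}
validate(instance=recorded_body, schema=user_schema)
```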

Minimize CI Setup Time

With E2E testing, a common challenge is figuring out how to integrate your tests, your API mocks, and test-related infrastructure into a continuous integration (CI) environment. These test environments require the following:

  • Dependent services must be running

    • Alternatively, API mocks that represent them must be accessible

  • A test runner or pipeline to initiate tests

    • e.g. Cypress, Selenium, Playwright

  • Tooling to manage environment configurations

The time required to integrate these parts scales with both the number of dependent services and the following challenges:

Challenge         Description
Debuggability     When a test fails, how easily the cause of the failure can be determined
Maintainability   How quickly tests or dependent services can be modified
Configuration     How to separate configuration for different workflows, e.g. development and CI
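
As a toy illustration of the configuration challenge (a hypothetical pattern, not Stoobly's configuration format), workflow-specific settings can be kept behind a single switch:

```python
import os

# Hypothetical per-workflow settings: development records against live
# services, while CI serves mocks from a dedicated agent host.
CONFIGS = {
    "development": {"agent_proxy": "http://localhost:8080", "record": True},
    "ci": {"agent_proxy": "http://mock-agent:8080", "record": False},
}

workflow = os.environ.get("WORKFLOW", "development")
config = CONFIGS[workflow]
print(workflow, config)
```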

Given the above challenges, developing a robust CI setup for E2E testing can take anywhere from a few weeks to several months, depending on the complexity of the application and the number of engineers dedicated to its development.

Stoobly helps reduce CI setup time from 2 weeks to 1 day.

Stoobly simplifies the CI setup process to defining service and workflow configurations. With these configurations, Stoobly generates maintained, CI-ready tooling.
