Introduction to Speedscale

In 2021, it is clearer than ever that microservices are the most common application development architecture. Companies are moving away from monoliths and centralized SOA architectures toward a distributed set of services and APIs. With that shift, new approaches to testing need to be taken into account. Speedscale is a two-year-old company with a novel approach to API testing that is designed specifically for microservice environments.

In the past several years, there has been a shift to treating infrastructure as “cattle” and regenerating it on the fly. Speedscale brings a similar concept to test cases by regenerating a test suite on the fly, as opposed to the traditional approach of carefully writing and maintaining each test case.

Challenges of API Testing

When you sit down to start working with a new API, the first thing you want to do is understand how to make a request. Immediately, as a tester, you will run into a few problems:

  • It can be hard to understand exactly how to call the API. Even with specifications like Swagger, there are frequently other details to work out, such as authentication, non-prod endpoints, and field-level data requirements (the list goes on).
  • In a non-prod environment, all the other services and backend dependencies need to be configured and running for a test transaction to work.
  • Many interesting API calls consume or modify the test data, so before you can fully automate, you need a plan for resetting that data.

These are the realities of dealing with microservices, and it can take a long time to get test automation developed and running at scale. If you attempt to hand-code every test case and mock endpoint, you will not be able to keep up with the high rate of change of modern development. A new approach is needed.

Speedscale Addresses These Challenges

Instead of having to read piles of specifications and try out numerous endpoints, Speedscale integrates directly with the test environment to process API traffic. It uses this detailed traffic data in the following key ways (a minimal capture example follows the list):

  • Traffic Viewer shows you every call going into and out of each microservice. The inbound calls are useful to understand how a service should be tested. The outbound calls help identify the backend dependencies and how they are used.
  • Traffic Snapshot lets you capture a subset of interesting data for deeper analysis. It identifies all the calls to the microservice and then automatically generates a test suite and service mocks, so you can replay the snapshot over and over.
  • Traffic Replay is where you run snapshots through various scenarios such as multiplying the load or introducing chaos with the backend dependencies. And because the service mocks are available, this can drastically cut down on the need to reset test data between runs.
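
Capture starts by attaching Speedscale to the workload you want to observe. As a minimal sketch, the strategic-merge patch below reuses the sidecar.speedscale.com/inject annotation that also appears in the replay patch file later in this article; the Deployment name and file name are illustrative, and this assumes the Speedscale components are already installed in the cluster.

# sidecar-patch.yaml (illustrative); apply with:
#   kubectl patch deployment payment --patch-file sidecar-patch.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payment
  annotations:
    # request injection of the Speedscale sidecar so inbound and outbound calls are captured
    sidecar.speedscale.com/inject: "true"

Once the sidecar is in place, the calls into and out of the service appear in the Traffic Viewer described next.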

Traffic Viewer

At first glance, this looks a lot like a logging view. It shows the time of every single call and lets you filter down to a specific time range or a particular transaction that you are trying to troubleshoot.

[Screenshot: Traffic Viewer]

However, in addition to listing everything out, you also see the full detail of each call, including the request, headers, response, status code, and duration. You can even download a transaction to your own machine and replay it locally with the CLI.

[Screenshot: Response]

Traffic Snapshot Report

Once you have identified the subset of traffic that you want to use for your snapshot, you generate it either through the UI or via a command-line call:

speedctl snapshot create settings.json

This results in a service map that shows all the backend dependencies, whether they are other services inside the VPC or cluster, or third-party endpoints.

[Screenshot: Snapshot service map]

Traffic Replay Report

Now you have a subset of traffic that can be replayed against your microservice. You can either spin up a Docker container to run the load, or orchestrate the replay inside a Kubernetes cluster with a set of annotations. Here is an example Kubernetes patch file:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: payment
  annotations:
    # which captured snapshot/scenario to replay
    test.speedscale.com/scenarioid: cf0ffd03-6982-4c4c-a701-eb098306cd88
    # which test configuration (assertions, load profile) to use
    test.speedscale.com/testconfigid: standard
    # clean up after the replay finishes
    test.speedscale.com/cleanup: "true"
    # inject the Speedscale sidecar
    sidecar.speedscale.com/inject: "true"

Through a simple configuration, you can choose how to assert the results, how to ramp up the load, and whether the backends should behave chaotically. After running the replay, there is a detailed report showing standard metrics like latency, throughput, and success rate, as well as infrastructure details like CPU and memory.
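
Speedscale's actual configuration format is not shown in this article, so the snippet below is a purely hypothetical sketch of the knobs just described: assertions against the recorded responses (with dynamic fields ignored), a load ramp, and chaotic behavior for the mocked backends. All field names are illustrative, not Speedscale's real schema.

# hypothetical test configuration -- field names are illustrative only
assertions:
  - type: httpStatus          # compare status codes against the recorded snapshot
  - type: responseBody
    ignoreFields:             # skip values that change on every run
      - timestamp
      - uniqueId
load:
  rampUp: 60s                 # time to reach the target rate
  multiplier: 5               # replay the snapshot at 5x the recorded rate
mocks:
  chaos:
    enabled: true             # have the mocked backends inject occasional failures
    errorRate: 0.1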

[Screenshot: Performance report]

Drill down even further to see every single call in the report and view a side-by-side comparison of how the microservice behaved relative to the last version of your API. If there are fields that change on every run, like timestamps and unique IDs, you can easily ignore them to improve your success rate.

[Screenshot: Comparison]

CI Integration

Now you can take these API functional and load tests and integrate them with Continuous Integration so they run on every build. This way, you can “set it and forget it” for this microservice, with the results captured by the CI system.
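
As a sketch of what that could look like, the GitHub Actions job below applies the replay patch from the previous section on every push; the workflow file name, the KUBECONFIG_DATA secret, and the patch file name are all assumptions, and any CI system with kubectl access to the test cluster would work the same way.

# .github/workflows/speedscale-replay.yml (illustrative)
name: speedscale-replay
on: [push]
jobs:
  replay:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # assumes a kubeconfig for the test cluster is stored as a repository secret
      - name: Trigger traffic replay
        run: |
          echo "${{ secrets.KUBECONFIG_DATA }}" > kubeconfig
          kubectl --kubeconfig kubeconfig patch deployment payment --patch-file speedscale-patch.yaml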

[Screenshot: CI/CD integration]

Summary

The microservice wave is coming, and there will soon be hundreds more endpoints to test. If you expect to write every test case by hand, you will be rushing from one service to the next and likely won’t be able to catch up. Instead of relying on “testing in production”, take a look at new approaches to API testing, such as the one outlined in this short review of Speedscale.
