Testing Tips
We run several kinds of tests at Sentry as part of our CI process. This section aims to document some of the Sentry-specific helpers and give guidelines on what kinds of tests you should consider including when building new features.
Getting Set Up
The acceptance and Python tests require a functioning set of devservices. It is recommended you use devservices to ensure you have the required services running. If you also use your local environment for local testing, you will want to use the --project flag to separate local testing volumes from test suite volumes:
# Turn off services for local testing.
sentry devservices down
# Turn on services with a test prefix to use separate containers and volumes
sentry devservices up --project test
# Verify that test containers came up correctly
docker ps --format '{{.Names}}'
# Later when you're done running tests and want to run local servers again
sentry devservices down --project test && sentry devservices up
When using the --project option you can confirm which containers are running with docker ps. Each running container should be prefixed with test_. See the devservices docs section for more information on managing services.
Python Tests
For Python tests we use pytest and testing tools provided by Django. On top of this foundation we've added a few base test cases (in sentry.testutils.cases).
Endpoint integration tests are where the bulk of our test suite is focused. These tests help us ensure that the APIs our customers, integrations, and front-end application rely on continue to work in expected ways. You should endeavour to include tests that cover the various user roles and cross-organization/team access scenarios, as well as invalid data scenarios, as those are often overlooked when manually testing.
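As a minimal sketch (assuming the endpoint name and get_response conventions from our base APITestCase; the endpoint and scenario below are illustrative), a permissions-focused endpoint test might look like:

from sentry.testutils.cases import APITestCase

class OrganizationDetailsPermissionTest(APITestCase):
    # Hypothetical endpoint name, used here only for illustration.
    endpoint = "sentry-api-0-organization-details"

    def test_member_can_read(self):
        user = self.create_user()
        self.create_member(organization=self.organization, user=user, role="member")
        self.login_as(user)
        response = self.get_response(self.organization.slug)
        assert response.status_code == 200

    def test_non_member_is_denied(self):
        outsider = self.create_user()
        self.login_as(outsider)
        response = self.get_response(self.organization.slug)
        assert response.status_code == 403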
Running pytest
You can use pytest to run a single directory, single file, or single test depending on the scope of your changes:
# Run tests for an entire directory
pytest tests/sentry/api/endpoints/
# Run tests for all files matching a pattern in a directory
pytest tests/sentry/api/endpoints/test_organization_*.py
# Run test from a single file
pytest tests/sentry/api/endpoints/test_organization_group_index.py
# Run a single test
pytest tests/snuba/api/endpoints/test_organization_events_distribution.py::OrganizationEventsDistributionEndpointTest::test_this_thing
# Run all tests in a file that match a substring
pytest tests/snuba/api/endpoints/test_organization_events_distribution.py -k method_name
Some frequently used options for pytest are:
- -k: Filter test methods/classes by a substring.
- -s: Don't capture stdout when running tests.
Refer to the pytest docs for more usage options.
Creating data in tests
Sentry has also added a suite of factory helper methods that help you build data to write your tests against.
The factory methods in sentry.testutils.factories are available on all our test suite classes. Use these methods to build up the required organization, projects, and other Postgres-based state.
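As a minimal sketch (the scenario is illustrative; the factory names follow sentry.testutils.factories):

def test_cross_org_visibility(self):
    # Build a second organization with its own team and project.
    other_org = self.create_organization(name="other-org")
    other_team = self.create_team(organization=other_org)
    other_project = self.create_project(organization=other_org, teams=[other_team])
    # Assert that members of self.organization cannot see other_project.
    ...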
You should also use store_event() to store events in a similar way that the application does in production. Storing events requires your test to inherit from SnubaTestCase. When using store_event(), take care to set a timestamp in the past on the event. When omitted, the timestamp defaults to 'now', which can result in events not being picked up due to timestamp boundaries.
from sentry.testutils.helpers.datetime import before_now
from sentry.utils.samples import load_data

def test_query(self):
    data = load_data("python", timestamp=before_now(minutes=1))
    event = self.store_event(data, project_id=self.project.id)
Setting options and feature flags
If your tests are for feature-flagged endpoints or require specific options to be set, you can use helper methods to mutate the configuration data into the right state:
def test_success(self):
    with self.feature('organization:new-thing'):
        with self.options({'option': 'value'}):
            # Run test logic with features and options set.
            ...

    # Disable the new-thing feature.
    with self.feature({'organization:new-thing': False}):
        # Run your logic with the feature off.
        ...
External Services
Use the responses library to add stub responses for outbound API requests your code is making. This will help you simulate success and failure scenarios with relative ease.
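For example, a failure scenario might be stubbed like this (a minimal sketch; the URL, payload, and test name are illustrative):

import responses

@responses.activate
def test_sync_handles_server_error(self):
    # Stub the outbound request so no real network call is made.
    responses.add(
        responses.GET,
        "https://api.example.com/repos",
        json={"detail": "Internal Error"},
        status=500,
    )
    # Run the code under test and assert it degrades gracefully.
    ...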
Working with time reliably
When writing tests related to ingesting events, we have to operate within the constraint that events cannot be older than 30 days. Because all events must be recent, we cannot use traditional time freezing strategies to get consistent data in tests. Instead of choosing arbitrary points in time, we work backwards from the present, and have a few helper functions to do so:
from sentry.testutils.helpers.datetime import before_now, iso_format
five_min_ago = before_now(minutes=5)
iso_timestamp = iso_format(five_min_ago)
These functions generate datetime objects and ISO 8601 formatted datetime strings relative to the present, enabling you to have events at known time offsets without violating the 30-day constraint that Relay imposes.
Inspecting SQL queries in tests
Add the following to conftest.py in the project root:
import itertools

import pytest
from django.conf import settings
from django.db import connection, connections, reset_queries
from django.template import Template, Context

@pytest.fixture(scope="function", autouse=True)
def log_sql():
    # Reset captured queries and enable query logging for the test.
    reset_queries()
    settings.DEBUG = True
    yield
    # After the test completes, render every captured query to stdout.
    time = sum([float(q["time"]) for q in connection.queries])
    t = Template(
        "{% for sql in sqllog %}{{sql.sql|safe}}{% if not forloop.last %}\n\n{% endif %}{% endfor %}"
    )
    queries = list(itertools.chain.from_iterable([conn.queries for conn in connections.all()]))
    log = t.render(Context({"sqllog": queries, "count": len(queries), "time": time}))
    print(log)
Now all SQL executed during tests will be printed to stdout. It's recommended to narrow down the output by using pytest's -k selector. Also note that you'll need to pass -s to see stdout.
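For example (the test selector here is illustrative), combine both flags to see the queries for a single test:

pytest tests/sentry/api/endpoints/ -s -k test_simple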
Acceptance Tests
Our acceptance tests leverage Selenium and ChromeDriver to simulate a user using the front-end application and the entire backend stack. We use acceptance tests for two purposes at Sentry:
- To cover workflows that are not possible to cover with just endpoint tests or with Jest alone.
- To prepare snapshots for visual regression tests via our visual regression GitHub Actions.
Acceptance tests can be found in tests/acceptance and run locally with pytest.
Running acceptance tests
When you run acceptance tests, webpack will be run automatically to build static assets. If you change JavaScript files while creating or modifying acceptance tests, you'll need to rm .webpack.meta after each change to trigger a rebuild of static assets.
# Run a single acceptance test.
pytest tests/acceptance/test_organization_group_index.py \
-k test_with_onboarding
# Run the browser with a head so you can watch it.
pytest tests/acceptance/test_organization_group_index.py \
--no-headless=true \
-k test_with_onboarding
# Open each snapshot image
SENTRY_SCREENSHOT=1 VISUAL_SNAPSHOT_ENABLE=1 \
pytest tests/acceptance/test_organization_group_index.py \
-k test_with_onboarding
Locating Elements
Because we use emotion, our class names are generally not useful to browser automation. Instead, we liberally use data-test-id attributes to define hook points for browser automation and Jest tests (see the sketch at the end of the next section).
Dealing with Asynchronous actions
All of our data is loaded asynchronously into the front-end, and acceptance tests need to account for various latencies and response times. We favour using selenium's wait_until* features to poll the DOM until elements are present or visible. We do not use sleep().
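As a minimal sketch (assuming the AcceptanceTestCase base class and its browser helper; the path and test id below are illustrative), a test might wait on a data-test-id hook instead of sleeping:

from sentry.testutils.cases import AcceptanceTestCase

class OrganizationIssuesTest(AcceptanceTestCase):
    def test_issue_list_loads(self):
        self.login_as(self.user)
        # Illustrative path; the selector matches a data-test-id in the markup.
        self.browser.get(f"/organizations/{self.organization.slug}/issues/")
        self.browser.wait_until('[data-test-id="group-list"]')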
Visual Regression
Pixels matter, and because of that we use visual regression tests to help catch unintended changes to how Sentry is rendered. During acceptance tests we capture screenshots and compare the screenshots in your pull request to approved baselines.
While we have fairly wide coverage with visual regressions, there are a few important blind spots:
- Hover cards and hover states
- Modal windows
- Charts and data visualizations
All of these components and interactions are generally not included in visual snapshots, and you should take care when working on any of them.
Dealing with always-changing data
Because visual regression compares image snapshots, and a significant portion of our data deals with timeseries data, we often need to replace time-based content with 'fixed' data. You can use the getDynamicText helper to provide fixed content for components/data that depend on the current time or vary too frequently to be included in a visual snapshot.
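As a minimal sketch (assuming the getDynamicText({value, fixed}) helper shape; the import path may vary with the frontend alias configuration, and the values are illustrative):

import getDynamicText from 'app/utils/getDynamicText';

// Render a stable string in snapshots; the live value everywhere else.
const lastSeen = getDynamicText({
  value: new Date().toISOString(),
  fixed: 'Jan 1, 2000 12:00:00 AM UTC',
});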
Jest Tests
Our Jest suite provides functional and unit testing for frontend components. We favour writing functional tests that interact with components and observe outcomes (navigation, API calls) over tests that check prop passing and state mutations. See the Frontend Handbook for more Jest testing tips.
# Run jest in interactive mode
yarn test
# Run a single test
yarn test tests/js/spec/views/issueList/overview.spec.js
API Fixtures
Because our Jest tests run without an API, we have a variety of fixture builders available to help generate API response payloads. The TestStubs global includes all the fixture functions in tests/js/sentry-test/fixtures/.
You should also use MockApiClient.addMockResponse() to set responses for API calls your components will make. Failing to mock an endpoint will result in tests failing.
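As a minimal sketch (assuming the addMockResponse({url, body}) shape; the URL and fixture choice are illustrative):

// Return a stubbed issue list for any component that fetches this URL.
MockApiClient.addMockResponse({
  url: '/organizations/org-slug/issues/',
  body: [TestStubs.Group()],
});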