# End to end tests
## Table of Contents
* [Introduction](#introduction)
* [Flaky tests](#flaky-tests)
* [What is a flake](#what-is-a-flake)
* [Why flakes are problematic](#why-flakes-are-problematic)
* [Preventing flakes](#preventing-flakes)
* [If the end-to-end tests are failing on your PR](#if-the-end-to-end-tests-are-failing-on-your-pr)
* [Layout of the E2E test files](#layout-of-the-e2e-test-files)
* [Suite files](#suite-files)
* [`core/tests/protractor`](#coretestsprotractor)
* [`core/tests/protractor_desktop`](#coretestsprotractor_desktop)
* [`core/tests/protractor_mobile`](#coretestsprotractor_mobile)
* [Utilities](#utilities)
* [`core/tests/protractor_utils`](#coretestsprotractor_utils)
* [`extensions/**/protractor.js`](#extensionsprotractorjs)
* [Run E2E tests](#run-e2e-tests)
* [Write E2E tests](#write-e2e-tests)
* [Where to add the tests](#where-to-add-the-tests)
* [Interactions](#interactions)
* [Existing suite](#existing-suite)
* [New suite](#new-suite)
* [Writing the tests](#writing-the-tests)
* [Writing utilities](#writing-utilities)
* [Selecting elements](#selecting-elements)
* [Non-Angular pages](#non-angular-pages)
* [Writing robust tests](#writing-robust-tests)
* [Flakiness](#flakiness)
* [Independence](#independence)
* [Checking for flakiness](#checking-for-flakiness)
* [Codeowner Checks](#codeowner-checks)
* [Important Tips](#important-tips)
* [Metrics](#metrics)
* [Reference](#reference)
* [Forms and objects](#forms-and-objects)
* [Rich Text](#rich-text)
* [Async-Await Tips](#async-await-tips)
* [Good Patterns](#good-patterns)
* [Anti-Patterns](#anti-patterns)
* [Known kinds of flakes](#known-kinds-of-flakes)
* [document unloaded while waiting for result](#document-unloaded-while-waiting-for-result)
## Introduction
At Oppia, we care deeply about the end user, so we write end-to-end (E2E) tests that exercise our features from the user's perspective. These tests interact with pages just like a user would, for example by clicking buttons and typing into text boxes, and they check that the pages respond appropriately, for example by verifying that the correct text appears in response to the user's actions.
## Flaky tests
### What is a flake
Unfortunately, E2E tests are much less deterministic than our other tests. The tests operate on a web browser that accesses a local Oppia server, so the non-determinism of web browsers makes the tests less deterministic as well. For example, suppose that you write a test that clicks a button to open a modal and then clicks a button inside the modal to close it. Sometimes, the modal will open before the test tries to click the close button, so the test will pass. Other times, the test will try to click before the modal has opened, and the test will fail. We can see this schematically:
```text
                <---A--->
                        +-------+
                        | Modal |
+----------+   +---//---+ opens +-----------+
| Click to |   |        +-------+           |
| open     +---+                            +---->
| modal    |   |        +-------------+     |
+----------+   +---//---+ Click to    +-----+
                        | close modal |
                        +-------------+
                <---B--->
--------------------- time ---------------------->
```
The durations of steps `A` and `B` are non-deterministic because `A` depends on how quickly the browser executes the frontend code to open the modal, and `B` depends on how fast the test code runs. Since these operations happen in separate processes, the operating system makes no guarantees about which will complete first. In other words, we have a race condition.
This race condition means that the test can fail randomly even when there's nothing wrong with the code of the Oppia application (excluding tests). These failures are called _flakes_.
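In code, a flaky version of the modal example above might look something like this sketch (the selectors are invented for illustration):
```js
describe('Modal behaviour', function() {
  it('should open and close the modal', async function() {
    var openModalButton = element(
      by.css('.protractor-test-open-modal-button'));
    var closeModalButton = element(
      by.css('.protractor-test-close-modal-button'));

    await openModalButton.click();
    // Race condition: if the browser is still opening the modal, this click
    // happens too early and the test fails even though the application code
    // is fine.
    await closeModalButton.click();
  });
});
```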
### Why flakes are problematic
Flakes are annoying because they cause failures on PRs even when the code changes in those PRs are fine. This forces developers to rerun the failing tests, which slows development.
Further, flakes are especially problematic to certain groups of developers:
* **New contributors**, who are often brand-new to open source software development, can be discouraged by flakes. When they see a failing E2E test on their PR, they may think that they made a mistake and become frustrated when they can't find anything wrong with their code.
* **Developers without write access to the repository** cannot rerun tests, so they have to ask another developer to restart their tests for them. Waiting for someone to restart their tests can really slow down their work.
Finally, flakes mean that developers rerun failing tests more readily. We even introduced code to automatically rerun tests under certain conditions. These reruns make it easier for new flakes to slip through because if a new flake causes a test to fail, we might just rerun the test until it passes.
### Preventing flakes
Conceptually, preventing flakes is easy. We can use `waitFor` statements to make the tests deterministic despite testing a non-deterministic system. For example, suppose we have a function `waitForModal()` that waits for a modal to appear. Then we could write our test like this:
```text
                <---A--->
                        +-------+
                        | Modal |
+----------+   +---//---+ opens +---------------------------------+
| Click to |   |        +-------+                                 |
| open     +---+                                                  +---->
| modal    |   |        +----------------+    +-------------+     |
+----------+   +---//---+ waitForModal() +-//-+ Click to    +-----+
                        +----------------+    | close modal |
                                              +-------------+
                <---B---><-------C-------->
--------------------- time -------------------------------------------->
```
Now, we know that the test code won't move past `waitForModal()` until after the modal opens. In other words, we know that `B + C > A`. This assures us that the test won't try to close the modal until after the modal has opened.
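In Protractor, a wait like `waitForModal()` can be written with `browser.wait` and `ExpectedConditions` (in Oppia's tests such helpers live in `waitFor.js`). Here is a minimal sketch, with an invented selector and timeout:
```js
var until = protractor.ExpectedConditions;

// Block the test until the modal is actually visible, or fail after 10 seconds.
var waitForModal = async function() {
  await browser.wait(
    until.visibilityOf(element(by.css('.protractor-test-modal'))),
    10000, 'Modal took too long to appear');
};

describe('Modal behaviour', function() {
  it('should open and close the modal', async function() {
    await element(by.css('.protractor-test-open-modal-button')).click();
    await waitForModal();
    // It is now safe to interact with elements inside the modal.
    await element(by.css('.protractor-test-close-modal-button')).click();
  });
});
```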
The challenge in writing robust E2E tests is making sure to always include a waitFor statement like `waitForModal()`. It's common for people to write E2E tests, forget to include a waitFor somewhere, and still see the tests pass when they run them. Their tests might even pass consistently if the race condition only causes a failure very rarely. However, months later, an apparently unrelated change might shift the runtimes enough that one of the tests starts flaking frequently.
[Below](#write-e2e-tests), we'll discuss specific techniques you should use to prevent flakes in new tests that you write.
### If the end-to-end tests are failing on your PR
First, check that your changes couldn't be responsible. For example, if your PR updates the README, then there's no way it caused an E2E test to fail.
If your changes could be responsible for the failure, you'll need to investigate more. Try running the test locally on your computer. If it fails there too, you can debug locally. Even if you can only reproduce the flake on CI, there are lots of other ways you can debug. See our [guide to debugging E2E tests](Debug-end-to-end-tests.md).
If you are _absolutely certain_ that the failure was not caused by your changes, then you can restart the test. Remember that restarting tests can let new flakes into our code, so please be careful.
## Layout of the E2E test files
E2E test logic is divided between two kinds of files: suite files and utility files. Utility files provide functions for interacting with pages, for example by clicking buttons or checking that the expected text is visible. Suite files define the E2E tests using calls to the utility files.
Suppose you wanted to write an E2E test that changes a user's profile picture and then checks that the change was successful. Your utility file might define `setProfilePicture()` and `checkProfilePicture()` functions. Then your suite file would first call `setProfilePicture()` and then call `checkProfilePicture()`.
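For example, the suite file for this journey might look roughly like the following sketch (the page object, file path, and image path are hypothetical):
```js
// In the suite file. PreferencesPage is a hypothetical page object that would
// live in core/tests/protractor_utils/ and do the actual element interactions.
var PreferencesPage = require('../protractor_utils/PreferencesPage.js');

describe('Profile picture', function() {
  var preferencesPage = new PreferencesPage.PreferencesPage();

  it('should let a user change their profile picture', async function() {
    await preferencesPage.setProfilePicture('../data/test_image.png');
    await preferencesPage.checkProfilePicture();
  });
});
```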
### Suite files
Note that "suite files" are also known as "test files."
#### `core/tests/protractor`
This directory contains test suites that were applicable to both desktop and mobile interfaces. Certain operations were possible on only one of the two interfaces, so to distinguish between them we use the boolean `browser.isMobile`, which is defined in the `onPrepare` block of the protractor configuration file. We no longer run the mobile tests, but you might still see legacy code that uses this boolean.
#### `core/tests/protractor_desktop`
This directory houses all test suites which are exclusive to desktop interfaces. This generally includes core creator components like the rich-text editor.
#### `core/tests/protractor_mobile`
This directory contains all test suites which are exclusive to mobile interfaces. This includes navigating around the website using the hamburger menu. However, we don't run these tests anymore.
### Utilities
#### `core/tests/protractor_utils`
This directory contains utilities for performing actions using elements from the core components of Oppia (those found in `core/templates`).
The core protractor utilities consist of the following files:
* Page objects, for example `AdminPage` in `AdminPage.js`. These objects provide functions for interacting with a particular page.
* `forms.js`: Utilities for interacting with forms.
* `general.js`: Various utilities that are useful for many different pages.
* `users.js`: Utilities for creating users, logging in, and logging out.
* `waitFor.js`: Utilities for delaying actions with Protractor's ExpectedConditions. This lets you wait for some condition to be true before proceeding with the test.
* `workflow.js`: Functions for common tasks like creating explorations and assigning roles.
* `action.js`: Functions for common interactions with elements, such as clicking or sending keys. All new tests should use these functions instead of interacting with elements directly because these functions include appropriate waitFor statements. For example, use `action.click('Element name', elem)` instead of `elem.click()`.
The protractor tests use the above functions to simulate a user interacting with Oppia. They should not interact with the page directly (e.g. using `element()`) but instead make use of the utilities in `protractor_utils/`. If new functionality is needed for a test, it should be added in the utilities directory so that it is available for future tests to use and is easy to maintain.
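To make this concrete, a page-object utility built on `action.js` might look like the following sketch (the page, element, and class names are invented):
```js
// core/tests/protractor_utils/SomePage.js (illustrative only).
var action = require('./action.js');

var SomePage = function() {
  var saveButton = element(by.css('.protractor-test-save-button'));

  // Suite files call this function rather than touching the element directly;
  // action.click includes the appropriate waitFor before clicking.
  this.clickSaveButton = async function() {
    await action.click('Save button', saveButton);
  };
};

exports.SomePage = SomePage;
```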
#### `extensions/**/protractor.js`
Extensions provide `protractor.js` files to make them easier to test. The E2E test files call the utilities provided by these files to interact with an extension. For example, interactions include a `protractor.js` file that provides functions for customizing an interaction and checking that the created interaction matches expected criteria.
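The exact functions exported differ from extension to extension, but the general shape of an interaction's `protractor.js` is something like this sketch (the interaction, customization argument, and selectors are made up):
```js
// extensions/interactions/SomeInput/protractor.js (illustrative sketch).

// Called while editing an exploration, to fill in the interaction's
// customization arguments.
var customizeInteraction = async function(elem, placeholderText) {
  await elem.element(by.css('.protractor-test-placeholder-input'))
    .sendKeys(placeholderText);
};

// Called in the player, to check that the interaction the learner sees
// matches what the test set up.
var expectInteractionDetailsToMatch = async function(elem, placeholderText) {
  expect(
    await elem.element(by.tagName('input')).getAttribute('placeholder')
  ).toBe(placeholderText);
};

exports.customizeInteraction = customizeInteraction;
exports.expectInteractionDetailsToMatch = expectInteractionDetailsToMatch;
```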
## Run E2E tests
If you don't know the name of the suite you want to run, you can find it in `core/tests/protractor.conf.js`. Then you can run your test like this:
```console
$ python -m scripts.run_e2e_tests --suite="suiteName"
```
Chrome will open and start running your tests.
## Write E2E tests
### Where to add the tests
#### Interactions
If you are just creating a new interaction and want to add end-to-end tests for it then you can follow the guidance given at [Creating Interactions](Creating-Interactions.md), though the [forms and objects](#forms-and-objects) section of this page may also be helpful.
If you are adding functionality to an existing interaction, you can probably just add test cases to its `protractor.js` file. For example, the `AlgebraicExpressionInput` interaction's file is at [`oppia/extensions/interactions/AlgebraicExpressionInput/protractor.js`](https://github.com/oppia/oppia/blob/develop/extensions/interactions/AlgebraicExpressionInput/protractor.js).
#### Existing suite
First, take a look at the existing test suites in [`core/tests/protractor`](https://github.com/oppia/oppia/tree/develop/core/tests/protractor) and [`core/tests/protractor_desktop`](https://github.com/oppia/oppia/tree/develop/core/tests/protractor_desktop). If your test fits well into any of those suites, you should add it there.
#### New suite
If you need to, you can add a new test suite to [`core/tests/protractor_desktop`](https://github.com/oppia/oppia/tree/develop/core/tests/protractor_desktop) like this:
1. Create the new suite file under `core/tests/protractor_desktop`.
2. Add the suite to [`core/tests/protractor.conf.js`](https://github.com/oppia/oppia/blob/develop/core/tests/protractor.conf.js) (see the example after this list).
3. Add your new suite to GitHub Actions, whose workflow files are in [`.github/workflows`](https://github.com/oppia/oppia/tree/develop/.github/workflows). If there is an existing workflow that your suite would fit well with, add your suite there. Otherwise, create a new workflow. Note that we want all CI workflows to finish in less than 30 minutes, so check the workflow runtimes after your change!
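For step 2, registering the suite amounts to adding an entry to the `suites` object in `protractor.conf.js`; for a hypothetical suite it would look something like this fragment:
```js
// Fragment of core/tests/protractor.conf.js (suite name and file are placeholders).
suites: {
  // ... existing suites ...
  myNewFeature: [
    'protractor_desktop/myNewFeature.js'
  ]
},
```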
### Writing the tests
1. Think through what user journeys you want to test. Each user journey is a sequence of actions that a user could take. The end-to-end tests you write should execute those steps and make sure that Oppia behaves appropriately. Remember:
   * Test everything from the user's perspective. For example, instead of jumping to a page by the URL, navigate to the page using the links on the webpage like a user would.
   * Check the "happy paths" where the user does what you expect.
   * Check the "unhappy paths" where the user does something wrong. For example, if a text field only accepts 30 characters, your test should try entering 31 characters to make sure the appropriate error messages appear.
   * E2E tests are relatively "expensive," meaning that they take a while to run. Therefore, you should avoid testing something twice wherever possible. This usually means that fewer, larger tests are preferable to many smaller tests. For example, consider these tests:
     * Test exploration creation by creating a new exploration.
     * Test exploration deletion by creating a new exploration and then deleting it.

     Notice that we create an exploration in both tests. It would be more efficient to combine these into a single test:
     * Test exploration creation and deletion by creating an exploration and then deleting it.
2. Write the [utilities](#writing-utilities) you will need. Your test file should never interact with the page directly. Use utilities instead. A good way to check that you're doing all page interactions through the utilities is to ensure that you have no element selectors (e.g. `element(by.css(...))`) in your suite files.
3. Write the tests! Each test should step through one of your user journeys, asserting that the page is in the expected state along the way.
For information on writing tests with protractor, see the [protractor documentation](https://www.protractortest.org/#/). If you need to work out why your tests aren't working, check out our [debugging guide for E2E tests](Debug-end-to-end-tests.md).
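Putting these steps together, a suite file typically has the following overall shape (the user journey and utility calls below are illustrative placeholders, not a real Oppia suite):
```js
var general = require('../protractor_utils/general.js');
var users = require('../protractor_utils/users.js');
var workflow = require('../protractor_utils/workflow.js');

describe('Exploration creation and deletion', function() {
  it('should create and then delete an exploration', async function() {
    await users.createUser('creator@example.com', 'creatorUsername');
    await users.login('creator@example.com');
    // Every step is a call into protractor_utils; the suite file never uses
    // element() directly.
    await workflow.createExploration();
    // ... continue the user journey: edit, save, publish, delete ...
    await users.logout();
  });

  afterEach(async function() {
    await general.checkForConsoleErrors([]);
  });
});
```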
### Writing utilities
#### Selecting elements
Much of the difficulty of writing protractor code lies in specifying the element with which you wish to interact. It is important to do so in a way that is as insensitive as possible to superficial DOM features such as text and styling, so as to reduce the likelihood that the test will break when the production HTML is changed. Here are some ways to specify an element, in order of decreasing preference:
1. Adding a `protractor-test-some-name` class to the element in question, and then referencing it by `by.css('.protractor-test-some-name')`. We do not use `by.id` for this purpose because Oppia frequently displays multiple copies of a DOM element on the same page, and if an `id` is repeated then references to it will not work properly. This is the preferred method, since it makes clear to those editing production code exactly what the dependence on protractor is, thus minimizing the likelihood of confusing errors when they make changes. Sometimes this may not work, though (e.g. for embedded pages, third-party libraries and generated HTML), in which case you may instead need to use one of the options below.
2. Using existing element ids. We avoid using existing classes for this purpose as they are generally style specifications such as `big-button` that may be changed in the future.
3. You can use `by.tagName` if you are sure you are in a context where only one element will have (and is likely to have in future) the given name. The `` and `