
Testing frontend modules

Broadly speaking, when it comes to testing components in O3, there are three categories of tests:

  1. Unit tests - verify that individual, isolated pieces of functionality work as expected.
  2. Integration tests - verify that several units work together in harmony.
  3. End to End (e2e) tests - a helper robot that behaves like a user to click around the app and verify that it functions correctly.

When working with a frontend module, its accompanying unit (and, sometimes, integration) tests will typically be colocated in a file named *.test.tsx. In general, these categories differ as follows:

  • Integration tests typically render the full app while mocking out a few bits like network requests or database access.
  • Unit tests typically test pure functions and assert that they return a given output when given some input.
  • e2e tests typically run the entire application (both frontend and backend) and your test will interact with the app just like a typical user would.
ℹ️ Testing philosophy: The more your tests resemble the way your software is used, the more confidence they can give you.

Unit testing

We’re going to walk through an example test suite to understand the approach taken. In this case, we’ll use the module-management frontend module. This application takes the Manage Module page from the OpenMRS 2.x reference application and redesigns it to work within O3. It provides an interface for users to manage backend modules. It lists all the installed modules and allows admins to control the execution of modules via the exposed Start, Stop and Unload actions. Users can also view detailed information about the listed modules.

Let’s look at how we could test the ModuleManagement component. Looking at its code, we can see that it does several things:

  • It fetches module data from the backend using a custom SWR hook.
  • When data is loading, a loading state gets displayed.
  • When data is loaded but empty, an empty state is displayed.
  • When a problem occurs while fetching data, an error state is displayed.
  • When data gets fetched, a Carbon datatable containing information about existing modules is rendered.
  • If the user viewing the module list has sufficient privileges, action buttons get shown on the page.

We could write a test suite that tests each of these pathways.

Write the test

Create a new file next to module-management.component.tsx named module-management.test.tsx.

Set up a function that renders the component:

function renderModuleManagement() {
  renderWithSwr(<ModuleManagement />);
}

Note: This renderWithSwr helper function wraps a component in an SWR context which provides a global configuration for all SWR hooks.
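
For orientation, here's a minimal sketch of what such a helper might look like, assuming it simply wraps the component in an SWRConfig provider with a fresh cache per test (the actual helper lives in the project's shared test utilities):

import { render } from "@testing-library/react";
import { SWRConfig } from "swr";
import type { ReactElement } from "react";

// Hypothetical sketch of a `renderWithSwr` helper. The real helper lives in the
// project's shared test utilities; this version disables SWR's request deduplication
// and gives each test a fresh cache so tests don't leak state into one another.
export function renderWithSwr(ui: ReactElement) {
  return render(<SWRConfig value={{ dedupingInterval: 0, provider: () => new Map() }}>{ui}</SWRConfig>);
}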

Test that a loading state gets rendered

We don’t want our test to hit the real backend. To avoid that, we can mock the data fetching logic as follows:

mockedOpenmrsFetch.mockReturnValueOnce({ data: { results: [] } });

This mocks out the openmrsFetch function, stubbing out its implementation and replacing it with a mock that returns an object whose results property is an empty array.
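
If you're wondering where mockedOpenmrsFetch comes from, it's typically a Jest mock of the openmrsFetch function exported by @openmrs/esm-framework. A minimal sketch, assuming the module is mocked at the top of the test file (the exact setup varies across repositories):

import { openmrsFetch } from "@openmrs/esm-framework";

// Replace the framework's fetch wrapper with a Jest mock function. This setup is an
// assumption for illustration; some repositories provide shared framework mocks instead.
jest.mock("@openmrs/esm-framework", () => ({
  openmrsFetch: jest.fn(),
}));

const mockedOpenmrsFetch = openmrsFetch as jest.Mock;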

Next, we render the component and wait for the loading indicator to disappear:

renderModuleManagement();

await waitForElementToBeRemoved(() => [...screen.queryAllByRole(/progressbar/i)], {
  timeout: 4000,
});

This code asks Testing Library to wait until the loader gets removed from the UI, mimicking a backend request resolving. Next, we can start writing our assertions:

expect(screen.queryByRole("progressbar")).not.toBeInTheDocument();
expect(screen.queryByRole("table")).not.toBeInTheDocument();
expect(screen.getByText(/there are no modules to display/i)).toBeInTheDocument();

These assertions:

  1. Assert that a DOM element with the role progressbar is not in the document.
  2. Assert that the DOM doesn’t contain an element with the role table.
  3. Assert that the text There are no modules to display gets rendered in the document.

At any point, you can run tests by calling yarn test or yarn turbo test (if turbo is configured).

To evaluate the error state, you could write:

it("renders an error state view if there was a problem fetching module data", async () => {
  const error = {
    message: "Internal Server Error",
    response: {
      status: 500,
      statusText: "Internal Server Error",
    },
  };
 
  // Syntactic sugar for `mockedOpenmrsFetch.mockReturnValueOnce(Promise.reject(error))`
  mockedOpenmrsFetch.mockRejectedValueOnce(error);
 
  renderModuleManagement();
 
  // Wait for the loading state to disappear from the screen
  await waitForLoadingToFinish();
 
  expect(screen.getByText(/sorry, there was a problem fetching modules/i)).toBeInTheDocument();
});
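
The waitForLoadingToFinish helper used here isn't defined in this snippet; it typically just wraps the waitForElementToBeRemoved call shown earlier. A possible sketch, assuming that's all it does:

import { screen, waitForElementToBeRemoved } from "@testing-library/react";

// Assumed implementation of `waitForLoadingToFinish`: wait for every element with
// the `progressbar` role to disappear, mirroring the earlier inline call.
export function waitForLoadingToFinish() {
  return waitForElementToBeRemoved(() => [...screen.queryAllByRole(/progressbar/i)], {
    timeout: 4000,
  });
}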

To test the happy path, we could write the following test:

it("renders detailed information about the module", async () => {
  const user = userEvent.setup();
 
  const testModules = [
    {
      uuid: "initializer",
      display: "Initializer",
      name: "Initializer",
      description:
        "The OpenMRS Initializer module is an API-only module that processes the content of the configuration folder when it is found inside OpenMRS' application data directory.",
      packageName: "org.openmrs.module.initializer",
      author: "Mekom Solutions",
      version: "2.3.0",
      started: true,
      startupErrorMessage: null,
      requireOpenmrsVersion: "2.1.1",
      awareOfModules: [
        "org.openmrs.module.metadatamapping",
        "org.bahmni.module.bahmni.ie.apps",
        "org.openmrs.module.datafilter",
        "org.openmrs.module.htmlformentry",
        "org.bahmni.module.appointments",
        "org.openmrs.module.metadatasharing",
        "org.openmrs.module.openconceptlab",
        "org.bahmni.module.bahmnicore",
        "org.openmrs.module.idgen",
        "org.openmrs.module.legacyui",
      ],
      requiredModules: [],
      resourceVersion: "1.8",
    },
    {
      uuid: "serialization.xstream",
      display: "Serialization Xstream",
      name: "Serialization Xstream",
      description: "Core (de)serialize API and services supported by xstream library",
      packageName: "org.openmrs.module.serialization.xstream",
      author: "luzhuangwei",
      version: "0.2.15",
      started: true,
      startupErrorMessage: null,
      requireOpenmrsVersion: "1.9.9",
      awareOfModules: [],
      requiredModules: [],
      resourceVersion: "1.8",
    },
  ];
 
  // Return the `testModules` array defined above in the API response. The nested `data` property below corresponds to the `data` property returned from the `useSWR` hook
  mockedOpenmrsFetch.mockReturnValueOnce({
    data: { results: testModules },
  });
 
  renderModuleManagement();
 
  // Wait for the loading state to disappear from the screen
  await waitForLoadingToFinish();
 
  expect(screen.queryByRole("progressbar")).not.toBeInTheDocument();
  expect(screen.getByRole("button", { name: /add \/ upgrade modules/i })).toBeInTheDocument();
  expect(screen.getByRole("button", { name: /search from addons/i })).toBeInTheDocument();
  expect(screen.getByRole("button", { name: /check for updates/i })).toBeInTheDocument();
  expect(screen.getByRole("button", { name: /start all/i })).toBeInTheDocument();
  expect(screen.getByRole("table")).toBeInTheDocument();
  expect(screen.getByRole("heading", { name: /manage modules/i })).toBeInTheDocument();
  expect(screen.getByRole("searchbox", { name: /filter table/i })).toBeInTheDocument();
  expect(screen.getByRole("button", { name: /clear search input/i })).toBeInTheDocument();
 
  testModules.forEach((module) => {
    expect(screen.getByText(module.name)).toBeInTheDocument();
    expect(screen.getByText(module.description)).toBeInTheDocument();
  });
 
  const expectedColumnHeaders = [/status/, /name/, /author/, /version/, /description/];
 
  expectedColumnHeaders.forEach((header) => {
    expect(screen.getByRole("columnheader", { name: new RegExp(header, "i") })).toBeInTheDocument();
  });
 
  const searchbox = screen.getByRole("searchbox", { name: /filter table/i });
 
  // search for the Serialization Xstream module
  await user.type(searchbox, "Seri");
 
  expect(screen.getByText(/Serialization Xstream/i)).toBeInTheDocument();
  expect(screen.queryByText(/Initializer/i)).not.toBeInTheDocument();
 
  await user.clear(searchbox);
 
  // search for something that doesn't exist in the module list
  await user.type(searchbox, "super-duper-unreleased-module");
 
  expect(screen.queryByText(/Serialization Xstream/i)).not.toBeInTheDocument();
  expect(screen.queryByText(/Initializer/i)).not.toBeInTheDocument();
  expect(screen.getByText(/no matching modules found/i)).toBeInTheDocument();
});

There’s a lot going on here, so let’s break it down. To start off, we invoke the userEvent setup before the component gets rendered. Next, we set up an array called testModules which mimics the data we’d typically expect back from the backend. We then set up the mockedOpenmrsFetch function to return that mock data. After that, we render the component and wait for the loading state to resolve before running our assertions. Once that’s done, we run assertions against the module management datatable, comparing table headers and row data against our expected data. Finally, we simulate searching through the list, exploring both the happy path and the empty state.

While this test passes and can give us some confidence that this component works as expected, there are still a few weaknesses to this approach. For example:

  • Because we’re not looking at the API endpoint and the request parameters, we can’t tell whether the user is making the correct request (the sketch after this list shows one way to check at least the request URL).
  • Because we’re mocking the backend, we can’t confidently predict what will happen if the real server is down, or if it returns an unexpected result.
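
To partially address the first point, you can also assert against the arguments the mocked fetch received. The endpoint below is illustrative rather than the component's actual request URL:

// Illustrative only: inspect the first call to the mocked fetch and check the URL.
// The real endpoint and query parameters depend on the custom SWR hook's implementation.
const requestUrl = mockedOpenmrsFetch.mock.calls[0][0];
expect(requestUrl).toContain("/ws/rest/v1/module");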

Ultimately, to get the right testing strategy, you’ll likely require a mix of unit, integration, and e2e tests. You’ll also need to figure out the right level of mocking to apply. Mock too little, and you’re likely testing too many implementation details. Mock too much, and you’re likely sacrificing a lot of confidence.

End-to-end testing with Playwright

Playwright is a testing framework that allows you to write reliable end-to-end tests in JavaScript. Great work by the OpenMRS QA team means that we now have Playwright set up across most of our key repositories.

This means that we can now write e2e tests for key user journeys in the OpenMRS frontend. These tests are set up to run both on commit and merge. Developers are encouraged to keep the tests passing and, wherever possible, to extend them to cover new use cases. Ideally, each pull request ought to be accompanied by tests asserting that the logic it introduces works as expected.

To get started, you’ll typically need to install the Playwright browsers:

npx playwright install

We recommend installing the official Playwright VS Code plugin. This will give you access to the Playwright test runner, which you can use to run your tests. You can also use the npx playwright test command to run your tests. We also recommend following the best practices outlined in the Playwright documentation. This will help you write tests that are reliable and easy to maintain.
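
Repositories with Playwright set up ship their own configuration. For orientation, a minimal configuration along these lines is typical; the values below are illustrative, not the actual config:

import { defineConfig, devices } from "@playwright/test";

// Illustrative Playwright config sketch. The real config in each repository differs;
// note how the base URL is read from the E2E_BASE_URL environment variable used later.
export default defineConfig({
  testDir: "./e2e/specs",
  use: {
    baseURL: process.env.E2E_BASE_URL ?? "http://localhost:8080",
    trace: "on-first-retry",
  },
  projects: [{ name: "chromium", use: { ...devices["Desktop Chrome"] } }],
});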

Writing tests

In general, it is recommended to read through the official Playwright docs before writing new test cases. The project uses the official Playwright test runner and, generally, follows a very simple project structure:

e2e
|__ commands
|   ^ Contains "commands" (simple reusable functions) that can be used in test cases/specs,
|     e.g. generate a random patient.
|__ core
|   ^ Contains code related to the test runner itself, e.g. setting up the custom fixtures.
|     You probably need to touch this infrequently.
|__ fixtures
|   ^ Contains fixtures (https://playwright.dev/docs/test-fixtures) which are used
|     to run reusable setup/teardown tasks
|__ pages
|   ^ Contains page object model classes for interacting with the frontend.
|     See https://playwright.dev/docs/test-pom for details.
|__ specs
|   ^ Contains the actual test cases/specs. New tests should be placed in this folder.
|__ support
    ^ Contains support files required to run e2e tests, e.g. docker compose files.

When you want to write a new test case, start by creating a new spec in ./specs. Depending on what you want to achieve, you might want to create new fixtures and/or page object models. Have a look at the existing code to see how these different concepts play together.
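
For illustration, here is a trimmed-down sketch of what a page object model might look like. The method names mirror the HomePage used in the spec below, but the URL and selectors are assumptions rather than the actual implementation in the pages folder:

import type { Page } from "@playwright/test";

// Hypothetical page object model sketch. The real `HomePage` class lives in the
// `pages` folder; the route and selectors below are illustrative assumptions.
export class HomePage {
  constructor(readonly page: Page) {}

  async goto() {
    // Navigates relative to the configured base URL.
    await this.page.goto("/openmrs/spa/home");
  }

  async searchPatient(searchText: string) {
    await this.page.getByRole("searchbox").fill(searchText);
  }

  floatingSearchResultsContainer() {
    return this.page.locator("[data-testid='floatingSearchResultsContainer']");
  }

  async clickOnPatientResult(name: string) {
    await this.page.getByRole("link", { name }).click();
  }
}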

The spec files contain scenarios written in a BDD-like syntax. In this syntax, we utilize Playwright's test.step calls and emphasize a user-centric focus by using the "I" keyword. For more information on BDD-like syntax, you can refer to the documentation at https://cucumber.io/docs/gherkin/reference/. This resource provides detailed information on Gherkin, the language used for writing BDD specifications. The code snippet below demonstrates how this BDD-like syntax is employed to write a test case:

test("Search patient by patient identifier", async ({ page, api }) => {
  // extract details from the created patient
  const openmrsIdentifier = patient.identifiers[0].display.split("=")[1].trim();
  const firstName = patient.person.display.split(" ")[0];
  const lastName = patient.person.display.split(" ")[1];
  const homePage = new HomePage(page);
 
  await test.step("When I visit the home page", async () => {
    await homePage.goto();
  });
 
  await test.step("And I enter a valid patient identifier into the search field", async () => {
    await homePage.searchPatient(openmrsIdentifier);
  });
 
  await test.step("Then I should see only the patient with the entered identifier", async () => {
    await expect(homePage.floatingSearchResultsContainer()).toHaveText(/1 search result/);
    await expect(homePage.floatingSearchResultsContainer()).toHaveText(new RegExp(firstName));
    await expect(homePage.floatingSearchResultsContainer()).toHaveText(new RegExp(lastName));
    await expect(homePage.floatingSearchResultsContainer()).toHaveText(new RegExp(openmrsIdentifier));
  });
 
  await test.step("When I click on the patient", async () => {
    await homePage.clickOnPatientResult(firstName);
  });
 
  await test.step("Then I should be in the patient's chart page", async () => {
    await expect(homePage.page).toHaveURL(
      `${process.env.E2E_BASE_URL}/spa/patient/${patient.uuid}/chart/Patient Summary`
    );
  });
});

Running tests

To run e2e tests locally, you'd need to fire up a dev server running the frontend module whose tests you want to run.

Start the dev server

yarn start --sources "packages/esm-patient-conditions-app"

Override the E2E_BASE_URL

# 8080 is the default port for the dev server
export E2E_BASE_URL=http://localhost:8080

Run the tests in headed and UI modes:

yarn test-e2e --headed --ui

To run a single test file, pass in the name of the test file that you want to run:

yarn test-e2e conditions-form.spec.ts

To run a set of test files from different directories, pass in the names of the directories that you want to run the tests in.

yarn test-e2e specs/conditions specs/programs

To run files that have the text conditions in the file name, simply pass in these keywords to the CLI:

yarn test-e2e conditions

To run a test with a specific title, use the -g flag followed by the title of the test:

yarn test-e2e conditions -g "Record and edit a condition"

Demo data usage

To ensure that the test contains the necessary metadata, you may follow the procedures outlined below:

  1. Utilize the user interface - Suppose the test scenario involves editing patient information. In this case, you can use the UI to create a new patient record for testing purposes.
  2. Leverage demo data - If the test case only requires data viewing, the demo data available in the RefApp can suffice. Check the demo data module to learn more.
  3. Generate the required data - If the above options are inadequate, you can use the API to create the necessary data ahead of the test (see the sketch after this list).
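
For the third option, a command (see the commands folder described above) can use Playwright's request context to seed data before the test runs. The sketch below is hypothetical; the endpoint, payload, and helper name are illustrative and omit required details such as identifier types and locations:

import type { APIRequestContext } from "@playwright/test";

// Hypothetical command that creates a patient through the REST API before a test.
// Real commands handle identifiers, locations, and cleanup; this is only a sketch.
export async function createTestPatient(api: APIRequestContext) {
  const response = await api.post("patient", {
    data: {
      person: {
        names: [{ givenName: "Test", familyName: "Patient" }],
        gender: "M",
        birthdate: "1990-01-01",
      },
      identifiers: [], // identifier type and location omitted for brevity
    },
  });
  return await response.json();
}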

Debugging Tests

Refer to this documentation on how to debug a test.

View test reports from GitHub Actions / Bamboo

To download the report from the GitHub action/Bamboo plan, follow these steps:

  1. Go to the artifact section of the action/plan and locate the report file.
  2. Download the report file and unzip it using a tool of your choice.
  3. Follow the steps mentioned in this guide on how to view the HTML report.

The report will show you a full summary of your tests, including information on which tests passed, failed, were skipped, or were flaky. You can explore the details of individual tests, including any errors or failures, video recordings, traces and the steps involved in each test. Simply click on a test to view its details.
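
Once the report folder is extracted locally, you can typically open it with Playwright's built-in report viewer, for example (the path is illustrative):

# Serve the extracted HTML report locally; replace the path with your extracted folder
npx playwright show-report path/to/extracted-report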

Do's and Don'ts

  • Do ensure that all test cases are written clearly and concisely, with step-by-step instructions that can be easily understood.
  • Do use a variety of test cases to cover all possible scenarios, including best-case scenarios, worst-case scenarios, and edge cases.
  • Do ensure that all tests are executed in a timely and efficient manner to save time and resources.
  • Don't assume that a feature is working just because it seems to be functioning correctly. Test it thoroughly to ensure that all its features and functionalities are working as expected.
  • Don't ignore any errors or issues that arise during testing, even if they seem minor. Report them to the development team so that they can be addressed promptly.
  • Don't skip any critical paths or scenarios. Ensure that all scenarios are tested thoroughly to identify any potential issues or defects.

Best Practices

  • Start testing early in the development process to identify and address issues before they become more challenging and expensive to fix.
  • Utilize automated testing whenever possible to save time and increase efficiency.
  • Use real-world data and scenarios to create accurate and relevant test cases.
  • Ensure that all test cases are repeatable and easily reproducible to ensure that results can be verified and tested again if necessary.
  • Continuously review and update the testing plan to ensure that it covers all relevant features and scenarios.
  • Work collaboratively with the O3 team to ensure that any issues or defects are identified and resolved quickly.

Automating the E2E tests with GitHub Actions

Automating end-to-end (E2E) tests through GitHub Actions provides an efficient way to ensure the reliability of software changes.

Dockerized test environment

Our E2E tests are executed in a dockerized environment for each pull request and commit to the O3 repositories. This approach offers several advantages over traditional methods:

  • Reduced Dependency on Dev3 Server: By utilizing a dockerized environment, the E2E tests are not reliant on the status or availability of the dev3 server. This independence ensures that testing can proceed even if the dev3 server is experiencing issues.

  • Isolated Test Runs: Running tests on PRs and commits within isolated docker containers eliminates conflicts between data. This isolation prevents scenarios where testing data conflicts with other ongoing tests or development activities.

  • Minimized Impact of Failures: Failures within the E2E tests do not impact the status or stability of the dev3 server. This separation ensures that the main development environment remains unaffected by testing failures.

Optimization of Testing Process

To enhance the efficiency of the E2E testing process, we have implemented several optimization methods:

  1. Pre-filled Docker Images: The backend and database docker images used for automated e2e testing are pre-filled with necessary data and configurations. This eliminates the need to generate data during the initial setup of the testing environment. Consequently, the setup time is significantly reduced, enabling quick creation of the test instance.

  2. Dynamic Lightweight Frontend: To execute automated tests, a dynamic lightweight version of the frontend is used. This version exclusively includes apps and changes present in the current repository, along with essential esm-apps like the primary navigation app. This frontend image is built during the E2E tests' GitHub Actions workflow.

The pre-filled backend and database docker images, along with the dynamically built lightweight frontend image, are combined within the same docker-compose stack and used for setting up the e2e test environment.

Additional Implementation Details

Docker Image Generation: The pre-filled docker images are created and pushed to the Docker Hub through a dedicated bamboo stage within the REFAPP-D3X bamboo job. This stage involves a script that retrieves the latest versions of both the frontend and backend. It then runs and waits for data generation before building the :nightly-with-data docker images of the backend and database. These images are subsequently pushed to the Docker Hub. More details on this can be found here.