Commit Graph

15 Commits

Alan Guo Xiang Tan 2a1952d9ba
DEV: Only retry and log flaky tests on the main branch (#24889)
Why this change?

Pull requests can introduce flaky tests into the mix, and we do not want
to be hiding that during the pull request process. While this does mean
builds for PRs will be less stable than the `main` branch without
retries, we do not foresee this being a problem long term, since
monitoring flaky tests on the `main` branch means the number of flaky
tests will eventually be reduced.

What does this change do?

1. Introduce the `DISCOURSE_TURBO_RSPEC_RETRY_AND_LOG_FLAKY_TESTS` env
   variable which, when set, initializes `TurboTests::Runner` with the
   `retry_and_log_flaky_tests` kwarg set to `true` (see the sketch after this list).

2. Change the tests workflow to set `DISCOURSE_TURBO_RSPEC_RETRY_AND_LOG_FLAKY_TESTS` only when
   the build type is `backend` or `system` and `github.ref_name` is
   `main`.
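A minimal sketch of how the env variable might be translated into the runner kwarg; the option hash, the default file list, and the exact truthiness convention are assumptions for illustration, not the actual `bin/turbo_rspec` code:

```
# Sketch only: everything except the env variable name and the
# retry_and_log_flaky_tests kwarg is illustrative.
retry_and_log_flaky_tests =
  ENV["DISCOURSE_TURBO_RSPEC_RETRY_AND_LOG_FLAKY_TESTS"] == "1" # truthiness convention assumed

runner_options = {
  files: ARGV.empty? ? ["spec"] : ARGV,
  retry_and_log_flaky_tests: retry_and_log_flaky_tests,
}

# In the real script this hash would be handed to the runner; here we just print it.
puts runner_options.inspect
```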
2023-12-14 09:41:30 +08:00
Alan Guo Xiang Tan 39da9106ba
DEV: Introduce automatic reruns to RSpec tests on Github actions (#24811)
What motivated this change?

Our builds on GitHub Actions have been extremely flaky, mostly due to system tests. This has led to a drop in confidence
in our test suite, where our developers tend to assume that a failed job is due to a flaky system test. As a result, we
have had occurrences where changes with legitimate test failures were merged into the `main` branch because developers
assumed the failures were flaky tests.

What does this change do?

This change seeks to reduce the flakiness of our builds on GitHub Actions by automatically re-running RSpec tests once when
they fail. If a failed test subsequently passes in the re-run, we mark the test as flaky by logging it to a file on disk,
which is then uploaded as an artifact of the GitHub workflow run. We understand that automatic re-runs will lead to
lower accuracy of our tests, but we consider this an acceptable trade-off since a fragile build has a much greater impact
on our developers' time. Internally, the Discourse development team will be running a service to fetch the flaky tests
which have been logged for internal monitoring.

How is the change implemented?

1. A `--retry-and-log-flaky-tests` CLI flag is added to `bin/turbo_rspec`, which initializes `TurboTests::Runner`
with the `retry_and_log_flaky_tests` kwarg set to `true`.

2. When the `retry_and_log_flaky_tests` kwarg is set to `true` for `TurboTests::Runner`, we register an additional
formatter, `Flaky::FailuresLoggerFormatter`, with the `TurboTests::Reporter` in the `TurboTests::Runner#run` method.
The `Flaky::FailuresLoggerFormatter` has the simple job of logging all failed examples to a file on disk while running the
tests. The details of each failed example that are logged can be found in `TurboTests::Flaky::FailedExample.to_h`.

3. Once all the tests have run once, we check the result for failed examples. If there are any, we read the file on
disk to fetch the `location_rerun_location` of the failed examples, which is then used to run those tests in a new RSpec process.
In the rerun, we configure a `TurboTests::Flaky::FlakyDetectorFormatter` with RSpec, which removes every example that fails again from the log file on disk, since those examples are not flaky tests. Note that if there are too many failed examples on the first run, we deem the failures unlikely to be due to flaky tests and do not re-run them; as of writing, the threshold is 10 failed examples. A sketch of this rerun decision follows below.
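For illustration, a rough sketch of that rerun decision; the method name and hash shape are assumptions, and only the threshold of 10 and the `location_rerun_location` field come from the description above:

```
FAILED_EXAMPLES_THRESHOLD = 10 # threshold mentioned above; exceeding it skips the rerun

def rerun_failed_examples(failed_examples)
  if failed_examples.size > FAILED_EXAMPLES_THRESHOLD
    puts "Too many failures (#{failed_examples.size}); treating them as real breakage, not flakiness."
    return false
  end

  # Re-run only the failed examples in a fresh RSpec process; examples that fail
  # again are then dropped from the flaky-tests log since they are genuine failures.
  locations = failed_examples.map { |example| example[:location_rerun_location] }
  system("bundle", "exec", "rspec", *locations)
end

# Example invocation with a fabricated failure record:
# rerun_failed_examples([{ location_rerun_location: "./spec/models/topic_spec.rb:42" }])
```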
2023-12-13 07:18:27 +08:00
Daniel Waterworth 70d082584c
DEV: Allow explicitly enabling/disabling system tests in bin/turbo_rspec (#23515)
This doesn't alter the default behavior.
2023-09-11 13:11:06 -05:00
Alan Guo Xiang Tan 5897709a90
DEV: Use runtime info to split test files for parallel testing (#22060)
Using the runtime information, we will be able to more efficiently group
the test files across the test processes, leading to better
utilization of resources.
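As a rough illustration of the grouping idea (the actual implementation presumably builds on the runtimes recorded by `parallel_tests` rather than this hand-rolled version), a greedy split by recorded runtime could look like:

```
# Greedy longest-processing-time grouping: assign each file, slowest first,
# to the bucket with the smallest accumulated runtime.
def group_by_runtime(files_with_runtimes, process_count)
  buckets = Array.new(process_count) { { files: [], total: 0.0 } }

  files_with_runtimes.sort_by { |_, runtime| -runtime }.each do |file, runtime|
    bucket = buckets.min_by { |b| b[:total] }
    bucket[:files] << file
    bucket[:total] += runtime
  end

  buckets.map { |b| b[:files] }
end

# Example: split four spec files across two processes by recorded runtime.
p group_by_runtime(
  { "spec/a_spec.rb" => 12.0, "spec/b_spec.rb" => 3.0, "spec/c_spec.rb" => 8.0, "spec/d_spec.rb" => 6.0 },
  2,
)
```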
2023-06-12 09:07:17 +08:00
Daniel Waterworth 67afd85aae
Revert "DEV: Use runtime info to split test files for parallel testing (#21896)" (#22016)
This reverts commit 14ed971db6.

This prevented the core backend tests from running in GitHub CI
2023-06-08 15:13:26 -05:00
Alan Guo Xiang Tan 14ed971db6
DEV: Use runtime info to split test files for parallel testing (#21896)
Using the runtime information, we will be able to more efficiently group
the test files across the test processes, leading to better
utilization of resources.
2023-06-05 08:01:41 +08:00
Alan Guo Xiang Tan b00edf3ea0 DEV: Add `--profile=[COUNT]` option for `turbo_rspec`
Why is this change required?

By default, `RSpec` comes with a `--profile=[COUNT]` option as well, but
enabling that option means that the entire test suite needs to be
executed. This does not work so well for `turbo_rspec`, which splits our
test files into various "buckets" so the tests can be executed in
multiple processes. Therefore, this commit adds a similar
`--profile=[COUNT]` option to `turbo_rspec` that profiles only the
tests being executed. Examples:

`LOAD_PLUGINS=1 bin/turbo_rspec --profile plugins/*/spec/system`

or

`LOAD_PLUGINS=1 bin/turbo_rspec --profile=20 plugins/*/spec/system`
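Conceptually, this kind of profiling boils down to collecting example runtimes from the executed tests and printing the slowest `COUNT` of them. A toy sketch of that aggregation, with invented names and data rather than the actual `turbo_rspec` code:

```
# Print the COUNT slowest examples from a list of { description:, run_time: } hashes.
def print_profile(example_timings, count = 10)
  example_timings
    .sort_by { |timing| -timing[:run_time] }
    .first(count)
    .each { |timing| puts format("%.2fs  %s", timing[:run_time], timing[:description]) }
end

print_profile(
  [
    { description: "creates a topic", run_time: 4.21 },
    { description: "renders the composer", run_time: 9.87 },
    { description: "validates the post", run_time: 0.33 },
  ],
  2,
)
```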
2023-05-30 13:46:14 +09:00
Jarek Radosz bf8939f7ad
DEV: add `--seed` to turbo_rspec, tweak CI output (#21598) 2023-05-17 11:22:31 +02:00
Martin Brennan 4ab1f76499
DEV: Fix bin/turbo_rspec runtime recording (#20407)
Commit 57caf08e13 broke
`bin/turbo_rspec` timing recording via `TurboTests::Runner`,
because we changed the runner's default to all `spec/*` folders
except `spec/system`, rather than
the old `['spec']` array, which is what `TurboTests::Runner`
was relying on to determine whether to record test run
time with `ParallelTests::RSpec::RuntimeLogger`.

Instead, we can just pass a new `use_runtime_info` boolean to the
runner class and use it when running against the default set of
spec files using `bin/turbo_rspec` and the turbo rspec rake task.
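A minimal sketch of the shape of that change; only `use_runtime_info` and `ParallelTests::RSpec::RuntimeLogger` come from the description above, the rest is illustrative:

```
class RunnerSketch
  def initialize(files:, use_runtime_info: false)
    @files = files
    @use_runtime_info = use_runtime_info
  end

  # Previously this decision was inferred from the file list (roughly
  # `@files == ["spec"]`), which broke when the default file list changed.
  # Now the caller states it explicitly.
  def record_runtime?
    @use_runtime_info
  end
end

p RunnerSketch.new(files: ["spec/models", "spec/lib"], use_runtime_info: true).record_runtime?
```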
2023-02-23 07:47:11 +10:00
Martin Brennan 57caf08e13
DEV: Minimal first pass of rails system test setup (#16311)
This commit introduces Rails system tests, run with chromedriver, selenium,
and headless Chrome, to our testing toolbox.

We use the `webdrivers` gem and `selenium-webdriver` which is what
the latest Rails uses so the tests run locally and in CI out of the box.

You can use `SELENIUM_VERBOSE_DRIVER_LOGS=1` to show extra
verbose logs of what selenium is doing to communicate with the system
tests.

By default, JS logs are verbose so errors from JS are shown when
running system tests; you can disable this with
`SELENIUM_DISABLE_VERBOSE_JS_LOGS=1`.

You can use `SELENIUM_HEADLESS=0` to run the system
tests inside a Chrome browser instead of headless mode, which can be useful for debugging
and seeing what the spec sees. See the note on `bin/ember-cli` below to avoid
surprises.

I have modified `bin/turbo_rspec` to exclude `spec/system` by default;
support for parallel system specs is a little shaky right now, and we don't
want them slowing down `turbo_rspec` by default either.

### PageObjects and System Tests

To make querying and inspecting parts of the page easier
and more reusable between system tests, we are using the
concept of [PageObjects](https://www.selenium.dev/documentation/test_practices/encouraged/page_object_models/) in
our system tests. A "Page" here generally corresponds to
an overarching Ember route, e.g. "Topic" for `/t/324345/some-topic`,
and contains logic for querying components within the topic,
such as "Posts".

I have also split "Modals" into their own entity. Further down the
line we may want to explore creating independent "Component"
contexts.

The Capybara DSL should be included in each PageObject class;
a reference for it can be found at https://rubydoc.info/github/teamcapybara/capybara/master#the-dsl
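To make the pattern concrete, a hypothetical PageObject might look like the following; the class name and selectors are invented for the example, not taken from the Discourse codebase:

```
require "capybara/dsl"

module PageObjects
  module Pages
    class Topic
      include Capybara::DSL

      # Navigate to a topic route and return self so calls can be chained.
      def visit_topic(topic_id)
        visit("/t/#{topic_id}")
        self
      end

      # Query a component within the topic, e.g. a specific post.
      def has_post_number?(number)
        page.has_css?("#post_#{number}")
      end
    end
  end
end
```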

Since system tests are so slow, we want to focus on
the "happy path" and not check every possible context
and branch with them. They are meant to be overarching
tests that verify a number of things are correct using the full stack,
from JS and Ember to Rails to Ruby and then the database.

### CI Setup

Whenever a system spec fails, a screenshot
is taken and a build artifact is produced _after the entire CI run is complete_,
which can be downloaded from the Actions UI in the repo.

Most importantly, a step to build the Ember app using Ember CLI
is needed; otherwise the JS assets cannot be found by Capybara:

```
- name: Build Ember CLI
  run: bin/ember-cli --build
```

A new `--build` argument has been added to `bin/ember-cli` for this
case, which is not needed locally if you already have the Discourse
Rails server running via `bin/ember-cli -u`, since the whole server is built and
set up by default.

Co-authored-by: David Taylor <david@taylorhq.com>
2022-09-28 11:48:16 +10:00
Mark VanLandingham 9b4aba0d39
DEV: support --fail-fast in bin/turbo_rspec (#8170)
* [WIP] - default turbo spec env to test

* FEATURE: support for --fast-fail in bin/turbo_rspec

* fast-fail -> fail_fast to match rspec

* Moved thread killing outside of fail-fast check

* Removed failure_count incrementation from fast_fail_met
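A rough sketch of the fail-fast idea, with illustrative names rather than the actual runner code: track a failure count across workers and stop the remaining threads once the limit is reached.

```
def fail_fast_met?(failure_count, fail_fast_limit)
  !fail_fast_limit.nil? && failure_count >= fail_fast_limit
end

failure_count = 0
fail_fast_limit = 1

# In the runner, a failed example bumps the counter; once the limit is met the
# remaining worker threads are stopped (the thread killing mentioned above).
failure_count += 1
puts "stopping workers" if fail_fast_met?(failure_count, fail_fast_limit)
```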
2019-10-09 09:40:06 -05:00
Daniel Waterworth c3db5925a8 FIX: Turbo tests exit codes 2019-07-09 08:51:23 +01:00
Daniel Waterworth d6aa92e98e DEV: Add a verbose option to ./bin/turbo_rspec 2019-06-27 15:49:21 +01:00
Sam Saffron fc84e23b71 DEV: allow bin/turbo_tests to run tests without params 2019-06-21 11:33:22 +10:00
Daniel Waterworth e18ce56f4b DEV: Add a new way to run specs in parallel with better output (#7778)
* DEV: Add a new way to run specs in parallel with better output

This commit:

 1. adds a new executable, `bin/interleaved_rspec`, which works much like
    `rspec` but runs the tests in parallel.

 2. adds a rake task, `rake interleaved:spec` which runs the whole test
    suite.

 3. makes autospec use this new wrapper by default. You can disable this
    by running `PARALLEL_SPEC=0 rake autospec`.

It works much like the `parallel_tests` gem (and relies on it), but
makes each subprocess use a machine-readable formatter and parses this
output in order to provide a better overall summary.

(It's called interleaved, because parallel was taken and naming is
hard).
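The "machine-readable formatter" idea can be sketched as each subprocess emitting one JSON object per line while the parent process parses and tallies them; the message shape below is invented for illustration:

```
require "json"
require "stringio"

# Tally example results from a stream of JSON rows, one event per line.
def summarize(io)
  totals = Hash.new(0)
  io.each_line do |line|
    event = JSON.parse(line)
    totals[event["status"]] += 1 if event["type"] == "example"
  end
  totals
end

rows = <<~ROWS
  {"type":"example","status":"passed"}
  {"type":"example","status":"failed"}
  {"type":"example","status":"passed"}
ROWS

p summarize(StringIO.new(rows)) # => {"passed"=>2, "failed"=>1}
```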

* Make popen3 invocation safer

* Use FileUtils instead of shelling out

* DRY up reporter

* Moved summary logic into Reporter

* s/interleaved/turbo/g

* Move Reporter into its own file

* Moved run into its own class

* Moved Runner into its own file

* Move JsonRowsFormatter under TurboTests

* Join on threads at the end

* Acted on feedback from eviltrout
2019-06-21 10:59:01 +10:00