
Integration test improvements #2858

Merged: hubertdeng123 merged 11 commits into master from hubertdeng123/flakey-test-fix on Mar 7, 2024

Conversation

hubertdeng123 (Member) commented Mar 5, 2024

This PR adds a couple of improvements to our integration test pipeline.

  1. Parallelizes some tests; runtime was 23+ minutes and is now under 18 minutes.
  2. Reports failures to a Sentry project.

To report flakes, I first wanted to cut down the time it actually takes to run these tests, since with ~20-minute runs a job that flakes twice could take close to an hour.
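The excerpt here doesn't show how the parallelization is implemented; purely as a hypothetical illustration (the script names below are made up, not from this PR), one common shell-level approach is to background independent suites and collect their exit statuses:

```bash
#!/usr/bin/env bash
# Hypothetical illustration only -- not the mechanism used in this PR.
# Run two independent test suites in the background and fail if either fails.
set -uo pipefail

run_api_tests()      { ./_integration-test/run-api-tests.sh; }   # hypothetical script
run_frontend_tests() { ./_integration-test/run-ui-tests.sh; }    # hypothetical script

pids=()
run_api_tests & pids+=("$!")
run_frontend_tests & pids+=("$!")

status=0
for pid in "${pids[@]}"; do
  wait "$pid" || status=1
done
exit "$status"
```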

BYK (Member) commented Mar 5, 2024

I don't understand how this is going to help. Can you share your thinking?

hubertdeng123 (Member Author) replied:

> I don't understand how this is going to help. Can you share your thinking?

Please ignore what's going on in this branch for now; I'm just using it as a means to run CI while I experiment.

hubertdeng123 changed the title from "Flakey Tests Fix?" to "Integration test improvements" on Mar 6, 2024
@@ -41,10 +45,10 @@ echo "${_endgroup}"
echo "${_group}Starting Sentry for tests ..."
# Disable beacon for e2e tests
echo 'SENTRY_BEACON=False' >>$SENTRY_CONFIG_PY
echo y | $dcr web createuser --force-update --superuser --email $TEST_USER --password $TEST_PASS
hubertdeng123 (Member Author):

docker run is slower than just an exec into a running container

Member:

It is, but it is also intentional, to keep the run separate from the main web process.

That said, for a one-off thing like this, I think using exec is a good compromise if it saves a notable amount of time. I'd just want it documented with a brief comment.
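For context, the exec-based form that replaces it appears later in the diff; a brief comment along these lines (the wording is ours, not from the PR) would cover the reviewer's ask:

```bash
# Exec into the already-running web container instead of spinning up a one-off
# container with `docker compose run`: it is noticeably faster, and the web
# container is guaranteed to be up here because compose up waits on its healthcheck.
echo y | $dc exec web sentry createuser --force-update --superuser --email $TEST_USER --password $TEST_PASS
```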

hubertdeng123 marked this pull request as ready for review March 6, 2024 18:11
hubertdeng123 (Member Author) commented:

The integration test (v2.0.1, /usr/local/lib/docker/cli-plugins) and integration test (v2.7.0, /usr/local/lib/docker/cli-plugins) checks are expected and pending because of branch protection rules; if this PR goes in I'll update those.

uses: nick-fields/retry@v3
with:
max_attempts: 3
timeout_minutes: 10
Contributor:

Any chance this could actually increase the flake rate, due to timeouts becoming more frequent? Or are the flakes caused by the action itself timing out?

hubertdeng123 (Member Author):

The typical install logic takes around 4-5 minutes. I doubled that for the timeout, so I don't think this should ever increase the flake rate.

run: ./integration-test.sh
uses: nick-fields/retry@v3
with:
max_attempts: 3
Contributor:

Just so I understand: what was the behavior before you added these settings? Unlimited retries? No timeout so it hung until the action crashed?

hubertdeng123 (Member Author):

The tests would fail from flakes way too often; adding this in drastically increases the chance that the tests pass when they should.

Contributor:

So, in effect, the previous (implicit) setting was max_attempts: 1? Or timeout_minutes: Infinity? Or some combination of the two? I get that the problem we are trying to solve is flakiness; I'm just not clear how changing (raising? lowering?) max_attempts and timeout_minutes helps that, since it's not obvious what the current state of affairs is.

hubertdeng123 (Member Author):

Yep, that's correct. The max_attempts setting really is just the first step toward adding flaky test detection: if a job fails but then is retried and succeeds, it can be marked as flaky. The timeout_minutes is a required parameter here; I can remove it and re-add it in a follow-up PR if that would be clearer.
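Putting the thread together, a commented sketch of the retry step (values taken from the snippets above; the step name and wrapped command are assumptions) might look like this. Before this change the script ran directly, i.e. effectively a single attempt with no per-step timeout:

```yaml
- name: Run integration tests        # step name is an assumption
  uses: nick-fields/retry@v3
  with:
    max_attempts: 3       # a pass on attempt 2 or 3 can later be flagged as a flake
    timeout_minutes: 10   # required by the action; roughly double the typical runtime
    command: ./integration-test.sh
```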

Contributor:

No, that's fine. I just wanted to understand the change. LGTM!

error_msg="An error occurred, caught SIG$1 on line $2"
echo "$error_msg"
dsn="https://[email protected]/6627632"
local sentry_cli="docker run --rm -v $(pwd):/work -e SENTRY_DSN=$dsn getsentry/sentry-cli"
Contributor:

Odd to me that we set REPORT_ISSUES=0 above, but then send an envelope here anyway? Maybe I am misunderstanding something, but why would we not just do REPORT_ISSUES=1, and maybe figure out some way to configure it to send to an "integration tests" or similar project on our actual dogfood instance?

hubertdeng123 (Member Author):

Sure, I believe I can incorporate this into the existing REPORT_ISSUES logic.
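One possible shape for that, sketched only (the function name is hypothetical and the exact sentry-cli subcommand is an assumption), reusing the dsn and error_msg variables from the excerpt above:

```bash
# Hypothetical sketch: only report the failure when issue reporting is enabled.
report_error_to_sentry() {
  if [ "${REPORT_ISSUES:-0}" != 1 ]; then
    return 0
  fi
  local sentry_cli="docker run --rm -v $(pwd):/work -e SENTRY_DSN=$dsn getsentry/sentry-cli"
  # send-event is one way to ship the message; the PR itself builds an envelope
  $sentry_cli send-event -m "$error_msg"
}
```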

timeout 90 bash -c 'until $(curl -Isf -o /dev/null $SENTRY_TEST_HOST); do printf '.'; sleep 0.5; done'
echo y | $dc exec web sentry createuser --force-update --superuser --email $TEST_USER --password $TEST_PASS
Contributor:

What happens if we exec into the web container but it isn't ready yet? Does Docker wait until it is ready, or does this fail? If it's the former, we should echo "Waiting for Sentry..." before this runs; otherwise the user may be waiting a while. If it fails, we should add some sort of sync point to wait for the container to be up before trying this.

hubertdeng123 (Member Author):

docker compose up will only succeed if the container healthcheck for web passes, so I don't think this will be a problem. That happens on a previous line, so by the time the tests reach the createuser logic the web container will always be ready.
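If an explicit guard were ever wanted anyway, a belt-and-braces readiness check before the exec could poll the container health status directly (a sketch; the container name is an assumption):

```bash
# Hypothetical extra guard; redundant today because compose up already blocks
# until the web healthcheck passes.
web_container="sentry-self-hosted-web-1"   # assumed container name
until [ "$(docker inspect --format '{{.State.Health.Status}}' "$web_container" 2>/dev/null)" = "healthy" ]; do
  printf '.'
  sleep 1
done
```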

BYK (Member) left a review comment:

Love the changes. BTW not sure how much time it costs us but maybe we can also use system-installed versions of jq and curl if we detect them and use the Docker-based ones as a fallback?

hubertdeng123 (Member Author) replied:

> BTW not sure how much time it costs us but maybe we can also use system-installed versions of jq and curl if we detect them and use the Docker-based ones as a fallback?

Good point, I can investigate that in a follow-up.
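A sketch of what BYK describes, assuming hypothetical fallback image names and command variables (the script's actual Docker-based wrappers aren't shown in this thread):

```bash
# Hypothetical sketch: prefer system-installed tools, fall back to Docker-based ones.
if command -v jq >/dev/null 2>&1; then
  jq_cmd=(jq)
else
  jq_cmd=(docker run --rm -i imega/jq)        # fallback image name is an assumption
fi

if command -v curl >/dev/null 2>&1; then
  curl_cmd=(curl)
else
  curl_cmd=(docker run --rm curlimages/curl)  # fallback image name is an assumption
fi

# Usage example:
echo '{"ok": true}' | "${jq_cmd[@]}" '.ok'
"${curl_cmd[@]}" -Isf -o /dev/null "$SENTRY_TEST_HOST"
```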

hubertdeng123 merged commit 746031d into master on Mar 7, 2024
10 checks passed
hubertdeng123 deleted the hubertdeng123/flakey-test-fix branch March 7, 2024 22:49
github-actions bot locked and limited conversation to collaborators Mar 23, 2024