infra: handle flaky tests #30501

Merged
merged 2 commits into master on Mar 26, 2025

Conversation

@ccurme ccurme (Collaborator) commented Mar 26, 2025

No description provided.


vercel bot commented Mar 26, 2025

The latest updates on your projects.

1 Skipped Deployment
Name       Status      Updated (UTC)
langchain  ⬜️ Ignored   Mar 26, 2025 5:22pm

@dosubot dosubot bot added labels on Mar 26, 2025: size:XS (This PR changes 0-9 lines, ignoring generated files), 🤖:nit (Small modifications/deletions, fixes, deps or improvements to existing code or docs)
@ccurme ccurme requested a review from Copilot March 26, 2025 17:23

@Copilot Copilot AI left a comment

Pull Request Overview

This PR addresses flaky test failures by adding a retry mechanism via the pytest flaky marker and updating test dependencies. Key changes include:

  • Adding @pytest.mark.flaky(retries=3, delay=1) decorators to various test functions (see the sketch after this list).
  • Including the pytest-retry dependency in the pyproject.toml files for both the OpenAI and Anthropic partners.
  • Extending flaky test handling to both synchronous and asynchronous test cases.
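
For illustration only, a minimal sketch of what such a marked test looks like, assuming the pytest-retry plugin is installed; the test name, model, and assertion are hypothetical and not taken from this PR:

```python
import pytest

from langchain_openai import ChatOpenAI


# Retry the test up to 3 times, waiting 1 second between attempts,
# before pytest reports it as failed. The `retries`/`delay` keywords
# come from the pytest-retry plugin's `flaky` marker.
@pytest.mark.flaky(retries=3, delay=1)
def test_chat_completion() -> None:  # hypothetical test name
    llm = ChatOpenAI(model="gpt-4o-mini")
    response = llm.invoke("Say 'hello'.")
    assert isinstance(response.content, str)
```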

Reviewed Changes

Copilot reviewed 6 out of 6 changed files in this pull request and generated no comments.

Reviewed files:
  • libs/partners/openai/tests/integration_tests/chat_models/test_responses_api.py: added flaky markers to tests to automatically retry on failures.
  • libs/partners/openai/pyproject.toml: added pytest-retry to support the flaky test markers.
  • libs/partners/anthropic/tests/integration_tests/test_chat_models.py: added flaky marker for a specific test case.
  • libs/partners/anthropic/pyproject.toml: added pytest-retry to support flaky test handling.

Comments suppressed due to low confidence (5)

libs/partners/openai/tests/integration_tests/chat_models/test_responses_api.py:203

  • Consider reordering decorators so that @pytest.mark.parametrize is applied before @pytest.mark.flaky to ensure that retries are executed for each parameterized instance.
@pytest.mark.flaky(retries=3, delay=1)
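
For context, a hedged sketch of the ordering the comment suggests; the parameter names and test body are illustrative, and whether the order actually changes retry behavior depends on how pytest-retry resolves stacked marks:

```python
import pytest


# Ordering suggested by the review comment: `parametrize` above `flaky`,
# so each parameterized case carries the flaky marker.
@pytest.mark.parametrize("output_version", ["v0", "responses/v1"])  # hypothetical parameters
@pytest.mark.flaky(retries=3, delay=1)
def test_parsed_output(output_version: str) -> None:  # hypothetical test name
    assert output_version in {"v0", "responses/v1"}
```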

libs/partners/openai/tests/integration_tests/chat_models/test_responses_api.py:112

  • Confirm that the flaky decorator supports async test functions correctly, as additional handling might be required for asynchronous retries.
@pytest.mark.flaky(retries=3, delay=1)
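
As a rough sketch of the shape such an async test takes (assuming pytest-asyncio or an equivalent plugin collects coroutine tests; whether pytest-retry retries them correctly is exactly what this comment asks to confirm):

```python
import pytest

from langchain_openai import ChatOpenAI


# Async variant of a flaky-marked integration test. Assumes an async
# pytest plugin (e.g. pytest-asyncio) runs the coroutine; the test name
# and model below are hypothetical, not taken from the PR.
@pytest.mark.flaky(retries=3, delay=1)
async def test_async_chat_completion() -> None:
    llm = ChatOpenAI(model="gpt-4o-mini")
    response = await llm.ainvoke("Say 'hello'.")
    assert isinstance(response.content, str)
```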

libs/partners/openai/pyproject.toml:33

  • Verify that the added pytest-retry dependency is compatible with the other pytest plugins in use, ensuring consistent flaky test behavior.
"pytest-retry<1.8.0,>=1.7.0",

libs/partners/anthropic/tests/integration_tests/test_chat_models.py:733

  • Ensure that using the flaky marker is appropriate for this test case and that the retry mechanism functions as expected within the current test setup.
@pytest.mark.flaky(retries=3, delay=1)

libs/partners/anthropic/pyproject.toml:33

  • Double-check that the version constraints for pytest-retry align well with other testing dependencies to avoid potential integration issues.
"pytest-retry<1.8.0,>=1.7.0",

@ccurme ccurme merged commit 422ba4c into master Mar 26, 2025
64 checks passed
@ccurme ccurme deleted the cc/flaky_tests branch March 26, 2025 17:28