test: split integration tests into integration and e2e suites #781
Workflow hunk `@@ -20,12 +20,119 @@ jobs:`. Before this change, the `integration_tests` job delegated to the shared reusable workflow:

```yaml
      tests_concurrency: "1"  # unchanged context above the modified job

  integration_tests:
    name: Integration tests
    uses: apify/workflows/.github/workflows/python_integration_tests.yaml@main
    secrets: inherit
    with:
      python_versions: '["3.10", "3.14"]'
      operating_systems: '["ubuntu-latest"]'
      python_version_for_codecov: "3.14"
      operating_system_for_codecov: ubuntu-latest
      tests_concurrency: "16"
```

The change replaces it with an inline `integration_tests` job (a new `e2e_tests` job is added further below):

```yaml
  integration_tests:
    name: Integration tests (${{ matrix.python-version }}, ${{ matrix.os }})

    if: >-
      ${{
        (github.event_name == 'pull_request' && github.event.pull_request.head.repo.owner.login == 'apify') ||
        (github.event_name == 'push' && github.ref == 'refs/heads/master')
      }}
```

**Contributor:** What problem does the `if` condition solve? […]

**Author:** They don't, thanks to this condition (they are skipped). Otherwise, they would be executed, and they would fail.
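To make the discussed behavior concrete, here is a minimal Python sketch of how that expression evaluates (the function name and parameters are illustrative, not part of the workflow or the SDK): secret-dependent jobs run only for pull requests whose head repository belongs to `apify` or for pushes to `master`, and are skipped otherwise instead of failing on missing secrets.

```python
def should_run_secret_dependent_jobs(
    event_name: str,
    head_repo_owner: str | None,
    ref: str | None,
) -> bool:
    """Mirror of the workflow's `if:` expression (illustrative only).

    Fork PRs have a different head-repository owner and no access to repository
    secrets, so the jobs are skipped for them rather than started and failed.
    """
    is_internal_pr = event_name == 'pull_request' and head_repo_owner == 'apify'
    is_push_to_master = event_name == 'push' and ref == 'refs/heads/master'
    return is_internal_pr or is_push_to_master


# A PR opened from a fork evaluates to False, so the jobs are skipped:
assert should_run_secret_dependent_jobs('pull_request', 'some-fork-owner', None) is False
# A push to master evaluates to True, so the jobs run:
assert should_run_secret_dependent_jobs('push', None, 'refs/heads/master') is True
```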
The `integration_tests` job continues with the test matrix, runner setup, test run, and coverage upload:

```yaml
    strategy:
      matrix:
        os: ["ubuntu-latest"]
        python-version: ["3.10", "3.14"]

    runs-on: ${{ matrix.os }}

    env:
      TESTS_CONCURRENCY: "16"

    steps:
      - name: Checkout repository
        uses: actions/checkout@v6

      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v6
        with:
          python-version: ${{ matrix.python-version }}

      - name: Set up uv package manager
        uses: astral-sh/setup-uv@v7
        with:
          python-version: ${{ matrix.python-version }}

      - name: Install Python dependencies
        run: uv run poe install-dev

      - name: Run integration tests
        run: uv run poe integration-tests-cov
        env:
          APIFY_TEST_USER_API_TOKEN: ${{ secrets.APIFY_TEST_USER_PYTHON_SDK_API_TOKEN }}
          APIFY_TEST_USER_2_API_TOKEN: ${{ secrets.APIFY_TEST_USER_2_API_TOKEN }}

      - name: Upload integration test coverage
        if: >-
          ${{
            matrix.os == 'ubuntu-latest' &&
            matrix.python-version == '3.14' &&
            env.CODECOV_TOKEN != ''
          }}
        uses: codecov/codecov-action@v5
        env:
          CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}
        with:
          token: ${{ env.CODECOV_TOKEN }}
          files: coverage-integration.xml
          flags: integration
```
The new `e2e_tests` job mirrors the integration job but runs the E2E suite with `max-parallel: 1`:

```yaml
  e2e_tests:
    name: E2E tests (${{ matrix.python-version }}, ${{ matrix.os }})

    if: >-
      ${{
        (github.event_name == 'pull_request' && github.event.pull_request.head.repo.owner.login == 'apify') ||
        (github.event_name == 'push' && github.ref == 'refs/heads/master')
      }}

    strategy:
      # E2E tests build and run Actors on the platform. Limit parallel workflows to 1 to avoid exceeding
      # the platform's memory limits. A single workflow with 16 pytest workers provides good test
      # parallelization while staying within platform constraints.
      max-parallel: 1
      matrix:
        os: ["ubuntu-latest"]
        python-version: ["3.10", "3.14"]

    runs-on: ${{ matrix.os }}

    env:
      TESTS_CONCURRENCY: "16"

    steps:
      - name: Checkout repository
        uses: actions/checkout@v6

      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v6
        with:
          python-version: ${{ matrix.python-version }}

      - name: Set up uv package manager
        uses: astral-sh/setup-uv@v7
        with:
          python-version: ${{ matrix.python-version }}

      - name: Install Python dependencies
        run: uv run poe install-dev

      - name: Run E2E tests
        run: uv run poe e2e-tests-cov
        env:
          APIFY_TEST_USER_API_TOKEN: ${{ secrets.APIFY_TEST_USER_PYTHON_SDK_API_TOKEN }}
          APIFY_TEST_USER_2_API_TOKEN: ${{ secrets.APIFY_TEST_USER_2_API_TOKEN }}

      - name: Upload E2E test coverage
        if: >-
          ${{
            matrix.os == 'ubuntu-latest' &&
            matrix.python-version == '3.14' &&
            env.CODECOV_TOKEN != ''
          }}
        uses: codecov/codecov-action@v5
        env:
          CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}
        with:
          token: ${{ env.CODECOV_TOKEN }}
          files: coverage-e2e.xml
          flags: e2e
```
A new README documents the E2E test suite (`@@ -0,0 +1,100 @@`):

# E2E tests

These tests build and run Actors using the Python SDK on the Apify platform. They are slower than integration tests (see [`tests/integration/`](../integration/)) because they need to build and deploy Actors.

When writing new tests, prefer integration tests if possible. Only write E2E tests when you need to test something that requires building and running an Actor on the platform.

## Running

```bash
export APIFY_TEST_USER_API_TOKEN=<your-token>
uv run poe e2e-tests
```

To run against a different environment, also set `APIFY_INTEGRATION_TESTS_API_URL`.
## Key fixtures

- **`apify_client_async`** — A session-scoped `ApifyClientAsync` instance configured with the test token and API URL.
- **`prepare_test_env`** / **`_isolate_test_environment`** (autouse) — Resets global state and sets `APIFY_LOCAL_STORAGE_DIR` to a temporary directory before each test.
- **`make_actor`** — Factory for creating temporary Actors on the Apify platform (built, then auto-deleted after the test).
- **`run_actor`** — Starts an Actor run and waits for completion (10-minute timeout); see the sketch after this list for how the fixtures are typically combined.
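As a rough sketch of how these fixtures fit together (illustrative only — it assumes the same imports as the examples below, and that the object returned by `run_actor` exposes `id` and `status` fields):

```python
async def test_fixtures_work_together(
    apify_client_async: ApifyClientAsync,
    make_actor: MakeActorFunction,
    run_actor: RunActorFunction,
) -> None:
    async def main() -> None:
        async with Actor:
            await Actor.set_value('OUTPUT', 'hello')

    # Build a throwaway Actor on the platform and run it to completion.
    actor = await make_actor(label='fixtures-demo', main_func=main)
    run_result = await run_actor(actor)
    assert run_result.status == 'SUCCEEDED', f'status={run_result.status}'

    # The API client fixture can then inspect the finished run directly.
    run_info = await apify_client_async.run(run_result.id).get()
    assert run_info is not None, 'run should be retrievable via the API client'
    assert run_info['status'] == 'SUCCEEDED', f'status={run_info["status"]}'
```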
## How to write tests

### Creating an Actor from a Python function

You can create Actors straight from a Python function. This is convenient because the test Actor's source code is checked by the linter along with the rest of the test suite.

```python
async def test_something(
    make_actor: MakeActorFunction,
    run_actor: RunActorFunction,
) -> None:
    async def main() -> None:
        async with Actor:
            print('Hello!')

    actor = await make_actor(label='something', main_func=main)
    run_result = await run_actor(actor)

    assert run_result.status == 'SUCCEEDED'
```

The `src/main.py` file will be set to the function definition, prepended with `import asyncio` and `from apify import Actor`. You can add extra imports directly inside the function body.
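For example (a sketch — the label, imported module, and key name are arbitrary), an extra import placed inside the function body is carried into the generated `src/main.py` along with the rest of the function definition:

```python
async def test_with_extra_import(
    make_actor: MakeActorFunction,
    run_actor: RunActorFunction,
) -> None:
    async def main() -> None:
        # Extra imports go inside the function body, because the generated
        # src/main.py contains only this function definition plus the
        # prepended `import asyncio` and `from apify import Actor`.
        from datetime import datetime, timezone

        async with Actor:
            await Actor.set_value('STARTED_AT', datetime.now(timezone.utc).isoformat())

    actor = await make_actor(label='extra-import', main_func=main)
    run_result = await run_actor(actor)

    assert run_result.status == 'SUCCEEDED', f'status={run_result.status}'
```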
### Creating an Actor from source files

Pass the `main_py` argument for a single-file Actor:

```python
async def test_something(
    make_actor: MakeActorFunction,
    run_actor: RunActorFunction,
) -> None:
    expected_output = f'ACTOR_OUTPUT_{crypto_random_object_id(5)}'
    main_py_source = f"""
import asyncio
from datetime import datetime
from apify import Actor

async def main():
    async with Actor:
        await Actor.set_value('OUTPUT', '{expected_output}')
"""

    actor = await make_actor(label='something', main_py=main_py_source)
    await run_actor(actor)

    output_record = await actor.last_run().key_value_store().get_record('OUTPUT')
    assert output_record is not None
    assert output_record['value'] == expected_output
```

Or pass `source_files` for multi-file Actors:

```python
actor_source_files = {
    'src/utils.py': """
from datetime import datetime, timezone

def get_current_datetime():
    return datetime.now(timezone.utc)
""",
    'src/main.py': """
import asyncio
from apify import Actor
from .utils import get_current_datetime

async def main():
    async with Actor:
        print('Hello! It is ' + str(get_current_datetime()))
""",
}
actor = await make_actor(label='something', source_files=actor_source_files)
```

### Assertions inside Actors

Since test Actors are not executed as standard pytest tests, there is no introspection of assertion expressions: on failure, only a bare `AssertionError` is shown. Always include explicit assertion messages:

```python
assert is_finished is False, f'is_finished={is_finished}'
```
A new 17-line file adds a helper for generating unique resource names (`@@ -0,0 +1,17 @@`):

```python
from __future__ import annotations

from crawlee._utils.crypto import crypto_random_object_id


def generate_unique_resource_name(label: str) -> str:
    """Generates a unique resource name, which will contain the given label."""
    name_template = 'python-sdk-tests-{}-generated-{}'
    template_length = len(name_template.format('', ''))
    api_name_limit = 63
    generated_random_id_length = 8
    label_length_limit = api_name_limit - template_length - generated_random_id_length

    label = label.replace('_', '-')
    assert len(label) <= label_length_limit, f'Max label length is {label_length_limit}, but got {len(label)}'

    return name_template.format(label, crypto_random_object_id(generated_random_id_length))
```
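A quick usage sketch of the helper (the random suffix differs on every call; the example name in the comment is purely illustrative):

```python
# Underscores in the label are normalized to dashes, and the result stays within
# the 63-character API name limit, e.g. 'python-sdk-tests-make-actor-generated-AbCdEfGh'.
name = generate_unique_resource_name('make_actor')
assert name.startswith('python-sdk-tests-make-actor-generated-')
assert len(name) <= 63
```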
**Contributor:** I don't think these braces are necessary in this context, but I may be wrong.

**Author:** It is copied from apify/workflows, and we know it works, so I would probably stay with that 🙂.