[{"data":1,"prerenderedAt":1082},["ShallowReactive",2],{"page-\u002Fadvanced-pytest-architecture-configuration\u002Foptimizing-test-discovery\u002Fpytest-xdist-vs-pytest-parallel-performance-comparison\u002F":3},{"id":4,"title":5,"body":6,"description":1075,"extension":1076,"meta":1077,"navigation":245,"path":1078,"seo":1079,"stem":1080,"__hash__":1081},"content\u002Fadvanced-pytest-architecture-configuration\u002Foptimizing-test-discovery\u002Fpytest-xdist-vs-pytest-parallel-performance-comparison\u002Findex.md","pytest-xdist vs pytest-parallel Performance Comparison",{"type":7,"value":8,"toc":1068},"minimark",[9,13,22,54,67,72,107,126,138,162,166,181,196,199,295,298,365,380,416,420,423,455,484,490,524,527,556,561,602,606,616,636,868,901,932,936,966,1001,1031,1064],[10,11,5],"h1",{"id":12},"pytest-xdist-vs-pytest-parallel-performance-comparison",[14,15,16,17,21],"p",{},"Selecting the correct parallel execution engine for a mature pytest suite requires moving beyond superficial benchmark numbers and understanding the underlying execution semantics, serialization boundaries, and hook interception models. The ",[18,19,20],"code",{},"pytest-xdist vs pytest-parallel performance comparison"," ultimately resolves to a trade-off between strict process isolation and lightweight concurrency overhead. Both plugins fundamentally alter pytest’s default sequential execution loop, but they achieve this through divergent architectural paradigms that dictate their suitability for CI\u002FCD pipelines, local development workflows, and complex test topologies.",[14,23,24,25,28,29,32,33,38,39,42,43,46,47,28,50,53],{},"At the architectural layer, both runners intercept core pytest phases such as ",[18,26,27],{},"pytest_collection_modifyitems"," and ",[18,30,31],{},"pytest_runtestloop",". 
Understanding how these hooks are overridden is critical when evaluating ",[34,35,37],"a",{"href":36},"\u002Fadvanced-pytest-architecture-configuration\u002F","Advanced Pytest Architecture & Configuration"," patterns, particularly when custom plugins rely on deterministic execution ordering or shared state initialization. ",[18,40,41],{},"pytest-xdist"," delegates test distribution to an external gateway layer, spawning independent Python interpreter instances that communicate via socket-based RPC. Conversely, ",[18,44,45],{},"pytest-parallel"," operates closer to the host process, leveraging Python’s standard ",[18,48,49],{},"multiprocessing",[18,51,52],{},"concurrent.futures"," modules to distribute workloads. This foundational divergence dictates memory footprint, fixture scoping behavior, and serialization constraints.",[14,55,56,57,59,60,63,64,66],{},"For production-grade test matrices, the decision matrix should be driven by workload characteristics rather than raw thread counts. CPU-bound suites with heavy fixture initialization, database migrations, or expensive mock setups typically benefit from ",[18,58,41],{}," due to its strict process boundaries and ",[18,61,62],{},"--dist loadscope"," scheduling algorithm, which groups tests by module or class to minimize fixture teardown\u002Frecreation overhead. I\u002FO-bound, network-heavy, or lightweight unit tests often execute faster under ",[18,65,45],{}," because it avoids the interpreter duplication penalty and leverages thread pools where the GIL is released during blocking I\u002FO operations. However, as concurrency scales, both runners expose distinct failure modes that require targeted profiling and configuration tuning. 
Engineers must evaluate not only wall-clock reduction but also memory pressure, coverage fragmentation, and deterministic reproducibility before committing to a parallelization strategy.",[68,69,71],"h2",{"id":70},"core-architectural-differences-execution-models","Core Architectural Differences & Execution Models",[14,73,74,75,28,77,79,80,82,83,86,87,90,91,94,95,98,99,102,103,106],{},"The execution model divergence between ",[18,76,41],{},[18,78,45],{}," stems from their underlying concurrency primitives and inter-process communication (IPC) strategies. ",[18,81,41],{}," relies on ",[18,84,85],{},"execnet",", a lightweight distributed execution library that establishes gateways via ",[18,88,89],{},"popen",", ",[18,92,93],{},"ssh",", or ",[18,96,97],{},"socket"," channels. Each worker runs a completely isolated Python interpreter, loading the test suite independently. Test collection occurs either centrally or per-worker depending on the ",[18,100,101],{},"--dist"," mode, with results serialized and transmitted back to the master node via pickled objects over the established channel. This architecture guarantees absolute memory isolation, preventing cross-worker contamination, but introduces significant baseline overhead: each worker incurs the full cost of interpreter startup, module importation, and ",[18,104,105],{},"conftest.py"," evaluation.",[14,108,109,111,112,115,116,119,120,122,123,125],{},[18,110,45],{}," takes a fundamentally different approach by utilizing ",[18,113,114],{},"multiprocessing.Pool"," for process-based execution and ",[18,117,118],{},"concurrent.futures.ThreadPoolExecutor"," for thread-based execution. In thread mode, workers share the same memory space and interpreter instance, bypassing process spawn latency entirely. This yields near-instantaneous startup times and minimal memory overhead, making it highly efficient for suites dominated by network requests, file I\u002FO, or database queries where the GIL is frequently released. 
However, thread mode inherits all standard Python threading limitations: CPU-bound workloads suffer from GIL contention, and any test that modifies global state, patches built-ins, or relies on thread-unsafe C extensions will exhibit non-deterministic failures. Process mode in ",[18,121,45],{}," mitigates GIL constraints but relies on standard ",[18,124,49],{}," queues, which enforce strict pickling requirements for all test arguments and fixture return values.",[14,127,128,129,131,132,134,135,137],{},"Worker isolation directly impacts fixture scoping semantics. Under ",[18,130,41],{},", module-scoped fixtures are instantiated once per worker, not once per test run. This can lead to unexpected behavior if tests assume a single shared resource across the entire suite. ",[18,133,45],{}," in thread mode shares module-scoped fixtures across all threads, which can cause race conditions if fixtures are not explicitly thread-safe. In process mode, it mirrors ",[18,136,41],{}," behavior but with less robust IPC for complex objects.",[14,139,140,141,145,146,148,149,151,152,28,155,158,159,161],{},"Test collection overhead scales non-linearly as worker counts increase. When running hundreds of workers, the master process must serialize and distribute test node IDs, which becomes a bottleneck if the collection phase is not optimized. Strategies for reducing this latency, such as caching collection results, filtering by marker, or leveraging nodeid hashing, are extensively documented in ",[34,142,144],{"href":143},"\u002Fadvanced-pytest-architecture-configuration\u002Foptimizing-test-discovery\u002F","Optimizing Test Discovery",". 
Beyond eight cores, ",[18,147,41],{}," typically outperforms ",[18,150,45],{}," in CPU-bound scenarios because its ",[18,153,154],{},"loadscope",[18,156,157],{},"loadfile"," distribution algorithms minimize redundant fixture execution, whereas ",[18,160,45],{},"’s simpler round-robin or queue-based distribution can lead to uneven workloads and increased idle time across workers.",[68,163,165],{"id":164},"benchmarking-methodology-profiling-setup","Benchmarking Methodology & Profiling Setup",[14,167,168,169,172,173,176,177,180],{},"Establishing a reproducible benchmarking harness requires isolating variables and measuring execution characteristics beyond simple wall-clock time. A rigorous evaluation must account for CPU-bound computation, I\u002FO-bound latency, memory allocation patterns, and worker spawn overhead. The following methodology leverages ",[18,170,171],{},"pytest-benchmark"," for statistical timing, ",[18,174,175],{},"memory_profiler"," for RSS tracking, and ",[18,178,179],{},"cProfile"," for call-graph analysis.",[14,182,183,184,187,188,191,192,195],{},"Begin by defining controlled test scenarios. Create separate test modules for CPU-bound operations (e.g., cryptographic hashing, matrix multiplication), I\u002FO-bound operations (e.g., ",[18,185,186],{},"time.sleep",", HTTP requests via ",[18,189,190],{},"responses"," or ",[18,193,194],{},"aioresponses","), and mixed workloads involving database connections or file system operations. Ensure all network and external dependencies are strictly mocked to eliminate infrastructure variance.",[14,197,198],{},"The following minimal reproducible harness demonstrates how to structure overhead comparisons. 
It isolates execution time, verifies process isolation, and provides a baseline for profiling:",[200,201,206],"pre",{"className":202,"code":203,"language":204,"meta":205,"style":205},"language-python shiki shiki-themes github-light github-dark","# benchmark_harness.py\nimport pytest\nimport time\nimport os\nfrom memory_profiler import profile\n\n@pytest.mark.parametrize('runner', ['xdist', 'parallel'])\ndef test_execution_overhead(runner):\n # Simulate mixed CPU\u002FI\u002FO workload\n time.sleep(0.01)\n _ = sum(range(10000))\n \n # Verify isolation: xdist workers set PYTEST_XDIST_WORKER (e.g. 'gw0')\n assert os.environ.get('PYTEST_XDIST_WORKER', 'gw0').startswith('gw')  # rather than the tautological pid != os.getppid()\n","python","",[18,207,208,216,222,228,234,240,247,253,259,265,271,277,283,289],{"__ignoreMap":205},[209,210,213],"span",{"class":211,"line":212},"line",1,[209,214,215],{},"# benchmark_harness.py\n",[209,217,219],{"class":211,"line":218},2,[209,220,221],{},"import pytest\n",[209,223,225],{"class":211,"line":224},3,[209,226,227],{},"import time\n",[209,229,231],{"class":211,"line":230},4,[209,232,233],{},"import os\n",[209,235,237],{"class":211,"line":236},5,[209,238,239],{},"from memory_profiler import profile\n",[209,241,243],{"class":211,"line":242},6,[209,244,246],{"emptyLinePlaceholder":245},true,"\n",[209,248,250],{"class":211,"line":249},7,[209,251,252],{},"@pytest.mark.parametrize('runner', ['xdist', 'parallel'])\n",[209,254,256],{"class":211,"line":255},8,[209,257,258],{},"def test_execution_overhead(runner):\n",[209,260,262],{"class":211,"line":261},9,[209,263,264],{}," # Simulate mixed CPU\u002FI\u002FO workload\n",[209,266,268],{"class":211,"line":267},10,[209,269,270],{}," time.sleep(0.01)\n",[209,272,274],{"class":211,"line":273},11,[209,275,276],{}," _ = sum(range(10000))\n",[209,278,280],{"class":211,"line":279},12,[209,281,282],{}," \n",[209,284,286],{"class":211,"line":285},13,[209,287,288],{}," # Verify isolation: xdist workers set PYTEST_XDIST_WORKER (e.g. 'gw0')\n",[209,290,292],{"class":211,"line":291},14,[209,293,294],{}," assert os.environ.get('PYTEST_XDIST_WORKER', 'gw0').startswith('gw')  # rather than the tautological pid != 
os.getppid()\n",[14,296,297],{},"To execute benchmarks, run the suite with both runners while capturing metrics:",[200,299,303],{"className":300,"code":301,"language":302,"meta":205,"style":205},"language-bash shiki shiki-themes github-light github-dark","# pytest-xdist baseline\npytest benchmark_harness.py -n auto --dist loadscope --benchmark-only --benchmark-save=xdist\n\n# pytest-parallel baseline\npytest benchmark_harness.py --workers auto --benchmark-only --benchmark-save=parallel\n","bash",[18,304,305,311,340,344,349],{"__ignoreMap":205},[209,306,307],{"class":211,"line":212},[209,308,310],{"class":309},"sJ8bj","# pytest-xdist baseline\n",[209,312,313,317,321,325,328,331,334,337],{"class":211,"line":218},[209,314,316],{"class":315},"sScJk","pytest",[209,318,320],{"class":319},"sZZnC"," benchmark_harness.py",[209,322,324],{"class":323},"sj4cs"," -n",[209,326,327],{"class":319}," auto",[209,329,330],{"class":323}," --dist",[209,332,333],{"class":319}," loadscope",[209,335,336],{"class":323}," --benchmark-only",[209,338,339],{"class":323}," --benchmark-save=xdist\n",[209,341,342],{"class":211,"line":224},[209,343,246],{"emptyLinePlaceholder":245},[209,345,346],{"class":211,"line":230},[209,347,348],{"class":309},"# pytest-parallel baseline\n",[209,350,351,353,355,358,360,362],{"class":211,"line":236},[209,352,316],{"class":315},[209,354,320],{"class":319},[209,356,357],{"class":323}," --workers",[209,359,327],{"class":319},[209,361,336],{"class":323},[209,363,364],{"class":323}," --benchmark-save=parallel\n",[14,366,367,368,191,371,373,374,376,377,379],{},"For memory profiling, wrap the test execution with ",[18,369,370],{},"tracemalloc",[18,372,175],{}," to capture peak RSS per worker. ",[18,375,41],{}," typically exhibits a higher baseline memory footprint due to full interpreter duplication. Each worker loads the entire test suite into memory, which can exceed 500MB per worker for large codebases. 
",[18,378,45],{}," in thread mode shares the interpreter heap, reducing peak RSS by 60-80%, but requires careful monitoring for memory leaks that compound across threads.",[14,381,382,383,385,386,389,390,393,394,397,398,28,400,397,403,405,406,408,409,411,412,415],{},"Use ",[18,384,179],{}," to identify serialization bottlenecks. Run with ",[18,387,388],{},"pytest --profile-svg"," to generate flame graphs highlighting time spent in ",[18,391,392],{},"pytest_runtest_protocol"," versus IPC serialization. Pay particular attention to ",[18,395,396],{},"multiprocessing.reduction"," calls in ",[18,399,45],{},[18,401,402],{},"execnet.remote",[18,404,41],{},". High serialization overhead often indicates non-picklable fixtures or excessive test parametrization. To isolate spawn latency, measure the delta between ",[18,407,27],{}," completion and the first test execution using ",[18,410,316],{},"'s internal timing hooks or ",[18,413,414],{},"pytest-profiling",". This metric reveals how quickly the runner can distribute work, which is critical for short-running test suites where overhead dominates total execution time.",[68,417,419],{"id":418},"edge-case-resolution-common-failure-modes","Edge-Case Resolution & Common Failure Modes",[14,421,422],{},"Parallel execution introduces failure modes that rarely manifest in sequential runs. The most pervasive issues stem from shared state contamination, serialization boundaries, and plugin hook incompatibilities. Rapid diagnosis requires systematic isolation of worker crashes, fixture scope leaks, and desynchronization in property-based testing.",[14,424,425,429,430,28,432,434,435,438,439,442,443,446,447,450,451,454],{},[426,427,428],"strong",{},"Fixture Scope Leaks & Shared State Contamination:"," Module-scoped and session-scoped fixtures are instantiated per-worker in ",[18,431,41],{},[18,433,45],{}," (process mode). 
If a fixture initializes a mutable global object (e.g., ",[18,436,437],{},"requests.Session",", database connection pool, or singleton cache), tests may inadvertently share state across workers if the fixture is not properly isolated. This manifests as flaky failures that disappear when running with ",[18,440,441],{},"-n 1",". Diagnosis workflow: execute ",[18,444,445],{},"pytest --setup-show -n auto"," to visualize fixture instantiation and teardown order. If multiple tests reference the same fixture instance across different node IDs, refactor the fixture to use ",[18,448,449],{},"scope=\"function\""," or implement explicit worker isolation via ",[18,452,453],{},"os.getpid()"," hashing.",[14,456,457,460,461,463,464,467,468,470,471,473,474,477,478,480,481,483],{},[426,458,459],{},"Thread-Safety Violations & Conftest Serialization Errors:"," ",[18,462,45],{}," in thread mode executes tests within the same interpreter, meaning any test that modifies global state, patches ",[18,465,466],{},"builtins",", or uses non-reentrant C extensions will cause race conditions. ",[18,469,45],{}," also struggles with ",[18,472,105],{}," files containing dynamically generated fixtures or closures that cannot be pickled. When the multiprocessing queue attempts to serialize test arguments, it raises ",[18,475,476],{},"TypeError: cannot pickle 'function' object",". ",[18,479,41],{}," sidesteps this because ",[18,482,85],{}," channels carry only simple built-in types: each worker imports conftest.py and builds its fixtures locally, so complex objects never cross the process boundary and only test reports are serialized back.",[14,485,486,489],{},[426,487,488],{},"Hypothesis Stateful Testing Desync:"," Hypothesis relies on a local database to track and minimize failing examples. Under parallel execution, workers generate independent example sets, leading to duplicated work and inconsistent failure reproduction. 
To prevent cross-worker example duplication, explicitly configure the database to use a shared directory:",[200,491,493],{"className":202,"code":492,"language":204,"meta":205,"style":205},"from hypothesis import settings\nfrom hypothesis.database import DirectoryBasedExampleDatabase\n\n@settings(database=DirectoryBasedExampleDatabase(\".hypothesis\u002Fexamples\"))\ndef test_stateful_workflow():\n ...\n",[18,494,495,500,505,509,514,519],{"__ignoreMap":205},[209,496,497],{"class":211,"line":212},[209,498,499],{},"from hypothesis import settings\n",[209,501,502],{"class":211,"line":218},[209,503,504],{},"from hypothesis.database import DirectoryBasedExampleDatabase\n",[209,506,507],{"class":211,"line":224},[209,508,246],{"emptyLinePlaceholder":245},[209,510,511],{"class":211,"line":230},[209,512,513],{},"@settings(database=DirectoryBasedExampleDatabase(\".hypothesis\u002Fexamples\"))\n",[209,515,516],{"class":211,"line":236},[209,517,518],{},"def test_stateful_workflow():\n",[209,520,521],{"class":211,"line":242},[209,522,523],{}," ...\n",[14,525,526],{},"Without this configuration, each worker maintains an isolated database, defeating Hypothesis's shrinking and caching mechanisms.",[14,528,529,532,533,536,537,540,541,544,545,548,549,551,552,555],{},[426,530,531],{},"Plugin Hook Incompatibilities:"," Many pytest plugins assume sequential execution. ",[18,534,535],{},"pytest-cov",", for example, aggregates coverage data in-memory before writing to disk. Under parallel execution, workers overwrite each other's ",[18,538,539],{},".coverage"," files, resulting in fragmented reports. Use ",[18,542,543],{},"--cov-append"," and run ",[18,546,547],{},"coverage combine"," post-execution. 
For ",[18,550,41],{},", leverage ",[18,553,554],{},"--cov-context=test"," to record per-test coverage contexts, so covered lines can be attributed to the tests that executed them.",[14,557,558],{},[426,559,560],{},"Rapid Diagnosis Checklist:",[562,563,564,572,578,589,595],"ol",{},[565,566,567,568,571],"li",{},"Run ",[18,569,570],{},"pytest --trace-config"," to verify plugin loading order and hook registration across workers.",[565,573,382,574,577],{},[18,575,576],{},"pytest --collect-only -q"," to ensure node IDs are stable and not dynamically generated per run.",[565,579,580,581,584,585,588],{},"Enable ",[18,582,583],{},"--forked"," (via ",[18,586,587],{},"pytest-forked",") to isolate tests that leak global state.",[565,590,591,592,594],{},"Profile with ",[18,593,370],{}," to detect memory leaks in long-running workers.",[565,596,597,598,601],{},"Set ",[18,599,600],{},"--max-worker-restart=3"," to prevent infinite crash loops during CI execution.",[68,603,605],{"id":604},"cicd-pipeline-integration-resource-tuning","CI\u002FCD Pipeline Integration & Resource Tuning",[14,607,608,609,191,612,615],{},"Integrating parallel test runners into CI\u002FCD pipelines requires dynamic resource allocation, artifact merging, and OS-level tuning. Static worker counts (",[18,610,611],{},"-n 4",[18,613,614],{},"--workers 4",") lead to resource starvation on underpowered runners and underutilization on high-core instances. Modern CI environments expose CPU topology via environment variables, enabling adaptive worker allocation.",[14,617,618,619,621,622,625,626,90,628,631,632,635],{},"The optimal configuration depends on the runner's architecture. For ",[18,620,41],{},", use ",[18,623,624],{},"-n auto"," to detect available CPUs, but cap it to prevent memory exhaustion. For ",[18,627,45],{},[18,629,630],{},"--workers auto"," sizes the process pool, while thread mode is enabled separately via ",[18,633,634],{},"--tests-per-worker","; omit it for CPU-bound suites so work remains in separate processes. 
In GitHub Actions, leverage matrix strategies to test multiple concurrency levels and select the optimal worker count based on historical execution data.",[200,637,641],{"className":638,"code":639,"language":640,"meta":205,"style":205},"language-yaml shiki shiki-themes github-light github-dark","# .github\u002Fworkflows\u002Fpytest-ci.yml\njobs:\n test:\n runs-on: ubuntu-latest\n strategy:\n matrix:\n workers: [2, 4, 8]\n steps:\n - uses: actions\u002Fcheckout@v4\n - name: Setup Python\n uses: actions\u002Fsetup-python@v5\n with:\n python-version: '3.11'\n - name: Install Dependencies\n run: pip install pytest pytest-xdist pytest-benchmark memory-profiler\n - name: Run Parallel Tests\n run: pytest -n ${{ matrix.workers }} --dist loadscope --junitxml=report-${{ matrix.workers }}.xml\n - name: Upload Results\n uses: actions\u002Fupload-artifact@v4\n with:\n name: test-reports-${{ matrix.workers }}\n path: report-*.xml\n","yaml",[18,642,643,648,658,665,676,683,690,714,721,734,746,756,763,773,784,795,807,817,829,839,846,857],{"__ignoreMap":205},[209,644,645],{"class":211,"line":212},[209,646,647],{"class":309},"# .github\u002Fworkflows\u002Fpytest-ci.yml\n",[209,649,650,654],{"class":211,"line":218},[209,651,653],{"class":652},"s9eBZ","jobs",[209,655,657],{"class":656},"sVt8B",":\n",[209,659,660,663],{"class":211,"line":224},[209,661,662],{"class":652}," test",[209,664,657],{"class":656},[209,666,667,670,673],{"class":211,"line":230},[209,668,669],{"class":652}," runs-on",[209,671,672],{"class":656},": ",[209,674,675],{"class":319},"ubuntu-latest\n",[209,677,678,681],{"class":211,"line":236},[209,679,680],{"class":652}," strategy",[209,682,657],{"class":656},[209,684,685,688],{"class":211,"line":242},[209,686,687],{"class":652}," matrix",[209,689,657],{"class":656},[209,691,692,695,698,701,703,706,708,711],{"class":211,"line":249},[209,693,694],{"class":652}," workers",[209,696,697],{"class":656},": 
[",[209,699,700],{"class":323},"2",[209,702,90],{"class":656},[209,704,705],{"class":323},"4",[209,707,90],{"class":656},[209,709,710],{"class":323},"8",[209,712,713],{"class":656},"]\n",[209,715,716,719],{"class":211,"line":255},[209,717,718],{"class":652}," steps",[209,720,657],{"class":656},[209,722,723,726,729,731],{"class":211,"line":261},[209,724,725],{"class":656}," - ",[209,727,728],{"class":652},"uses",[209,730,672],{"class":656},[209,732,733],{"class":319},"actions\u002Fcheckout@v4\n",[209,735,736,738,741,743],{"class":211,"line":267},[209,737,725],{"class":656},[209,739,740],{"class":652},"name",[209,742,672],{"class":656},[209,744,745],{"class":319},"Setup Python\n",[209,747,748,751,753],{"class":211,"line":273},[209,749,750],{"class":652}," uses",[209,752,672],{"class":656},[209,754,755],{"class":319},"actions\u002Fsetup-python@v5\n",[209,757,758,761],{"class":211,"line":279},[209,759,760],{"class":652}," with",[209,762,657],{"class":656},[209,764,765,768,770],{"class":211,"line":285},[209,766,767],{"class":652}," python-version",[209,769,672],{"class":656},[209,771,772],{"class":319},"'3.11'\n",[209,774,775,777,779,781],{"class":211,"line":291},[209,776,725],{"class":656},[209,778,740],{"class":652},[209,780,672],{"class":656},[209,782,783],{"class":319},"Install Dependencies\n",[209,785,787,790,792],{"class":211,"line":786},15,[209,788,789],{"class":652}," run",[209,791,672],{"class":656},[209,793,794],{"class":319},"pip install pytest pytest-xdist pytest-benchmark memory-profiler\n",[209,796,798,800,802,804],{"class":211,"line":797},16,[209,799,725],{"class":656},[209,801,740],{"class":652},[209,803,672],{"class":656},[209,805,806],{"class":319},"Run Parallel Tests\n",[209,808,810,812,814],{"class":211,"line":809},17,[209,811,789],{"class":652},[209,813,672],{"class":656},[209,815,816],{"class":319},"pytest -n ${{ matrix.workers }} --dist loadscope --junitxml=report-${{ matrix.workers 
}}.xml\n",[209,818,820,822,824,826],{"class":211,"line":819},18,[209,821,725],{"class":656},[209,823,740],{"class":652},[209,825,672],{"class":656},[209,827,828],{"class":319},"Upload Results\n",[209,830,832,834,836],{"class":211,"line":831},19,[209,833,750],{"class":652},[209,835,672],{"class":656},[209,837,838],{"class":319},"actions\u002Fupload-artifact@v4\n",[209,840,842,844],{"class":211,"line":841},20,[209,843,760],{"class":652},[209,845,657],{"class":656},[209,847,849,852,854],{"class":211,"line":848},21,[209,850,851],{"class":652}," name",[209,853,672],{"class":656},[209,855,856],{"class":319},"test-reports-${{ matrix.workers }}\n",[209,858,860,863,865],{"class":211,"line":859},22,[209,861,862],{"class":652}," path",[209,864,672],{"class":656},[209,866,867],{"class":319},"report-*.xml\n",[14,869,870,871,873,874,876,877,880,881,28,884,887,888,890,891,894,895,897,898,900],{},"Coverage report merging requires explicit configuration to prevent data loss. When using ",[18,872,535],{}," with parallel runners, each worker writes to a separate ",[18,875,539],{}," file. Configure ",[18,878,879],{},".coveragerc"," with ",[18,882,883],{},"parallel = True",[18,885,886],{},"data_file = .coverage"," to enable automatic suffixing. Post-execution, run ",[18,889,547],{}," followed by ",[18,892,893],{},"coverage report"," to generate unified metrics. For ",[18,896,41],{},", the ",[18,899,554],{}," flag records which test executed each covered line, enabling precise attribution of missed lines to specific tests.",[14,902,903,904,907,908,911,912,915,916,191,919,921,922,897,924,927,928,931],{},"OS-level resource limits frequently cause silent worker crashes under high concurrency. Linux enforces strict file descriptor limits (",[18,905,906],{},"ulimit -n","), which are exhausted when workers open database connections, sockets, or temporary files simultaneously. 
Increase the limit to ",[18,909,910],{},"65536"," in CI runners and tune ",[18,913,914],{},"--max-worker-restart"," to allow graceful recovery from transient crashes. Monitor worker memory using ",[18,917,918],{},"psutil",[18,920,370],{}," to detect RSS spikes that trigger OOM kills. For ",[18,923,41],{},[18,925,926],{},"--tx"," flag allows explicit worker specification (e.g., ",[18,929,930],{},"--tx 4*popen","), which is useful for distributed CI environments where workers run on separate machines.",[68,933,935],{"id":934},"frequently-asked-questions-edge-case-focus","Frequently Asked Questions (Edge-Case Focus)",[14,937,938,941,943,944,946,947,950,951,953,954,956,957,959,960,962,963,965],{},[426,939,940],{},"Why does pytest-parallel fail with 'cannot pickle local object' while pytest-xdist works?",[18,942,45],{}," relies on Python's standard ",[18,945,49],{}," module, which uses the ",[18,948,949],{},"pickle"," protocol to serialize test arguments, fixture return values, and closure states across process boundaries. Standard ",[18,952,949],{}," cannot serialize dynamically generated functions, lambda expressions, or objects with non-picklable C extensions. ",[18,955,41],{}," avoids this: its ",[18,958,85],{}," channels transmit only simple built-in types, and each worker collects and executes tests in its own interpreter, so complex objects never cross the process boundary, sidestepping standard ",[18,961,949],{}," limitations. To resolve ",[18,964,45],{}," failures, refactor closures into module-level functions, avoid dynamic fixture generation, or switch to thread mode if the workload is I\u002FO-bound.",[14,967,968,971,972,974,975,977,978,980,981,984,985,551,987,989,990,191,993,996,997,1000],{},[426,969,970],{},"How to resolve coverage report fragmentation when using both runners?","\nFragmentation occurs because each worker generates an isolated coverage data file. Use ",[18,973,535],{}," with the ",[18,976,543],{}," flag to prevent workers from overwriting each other's data. 
After execution, run ",[18,979,547],{}," to merge ",[18,982,983],{},".coverage.*"," files into a unified database. For ",[18,986,41],{},[18,988,554],{}," to record per-test contexts, ensuring each covered line can be traced to the test that executed it. In CI pipelines, configure ",[18,991,992],{},"coverage xml",[18,994,995],{},"coverage html"," post-merge to generate unified reports. Avoid using ",[18,998,999],{},"--cov-report=term"," during parallel runs, as it prints incomplete data before merging.",[14,1002,1003,1006,1007,28,1009,1011,1012,1014,1015,1017,1018,1020,1021,1024,1025,1027,1028,1030],{},[426,1004,1005],{},"Can pytest-xdist and pytest-parallel be combined for nested parallelism?","\nNo. Both plugins intercept ",[18,1008,31],{},[18,1010,27],{}," to distribute workloads. Combining them causes hook recursion, worker deadlocks, and unpredictable test execution ordering. ",[18,1013,41],{}," expects to control the entire distribution layer, while ",[18,1016,45],{}," assumes exclusive access to the process\u002Fthread pool. Attempting to nest them results in ",[18,1019,316],{}," raising ",[18,1022,1023],{},"HookCallError"," or silently dropping tests. Choose one runner based on workload profile: ",[18,1026,41],{}," for CPU-bound, isolated suites with heavy fixtures; ",[18,1029,45],{}," for lightweight, I\u002FO-bound tests with minimal shared state.",[14,1032,1033,1036,1037,1040,1041,1043,1044,1046,1047,1049,1050,1052,1053,1055,1056,1059,1060,1063],{},[426,1034,1035],{},"What causes 'Worker crashed' errors under heavy I\u002FO load?","\nWorker crashes under I\u002FO load typically stem from OS-level resource exhaustion, not test logic failures. High concurrency rapidly consumes file descriptors, socket connections, and memory, triggering ",[18,1038,1039],{},"OSError: [Errno 24] Too many open files"," or OOM kills. Increase ",[18,1042,906],{}," to at least ",[18,1045,910],{}," in CI runners and local environments. 
Tune ",[18,1048,41],{},"'s ",[18,1051,600],{}," to allow automatic recovery from transient crashes without halting the suite. Profile with ",[18,1054,370],{}," to detect memory leaks in long-running workers, and ensure all I\u002FO operations use context managers or explicit ",[18,1057,1058],{},"close()"," calls. For database connections, implement connection pooling with explicit ",[18,1061,1062],{},"max_overflow"," limits to prevent pool exhaustion.",[1065,1066,1067],"style",{},"html .default .shiki span {color: var(--shiki-default);background: var(--shiki-default-bg);font-style: var(--shiki-default-font-style);font-weight: var(--shiki-default-font-weight);text-decoration: var(--shiki-default-text-decoration);}html .shiki span {color: var(--shiki-default);background: var(--shiki-default-bg);font-style: var(--shiki-default-font-style);font-weight: var(--shiki-default-font-weight);text-decoration: var(--shiki-default-text-decoration);}html .dark .shiki span {color: var(--shiki-dark);background: var(--shiki-dark-bg);font-style: var(--shiki-dark-font-style);font-weight: var(--shiki-dark-font-weight);text-decoration: var(--shiki-dark-text-decoration);}html.dark .shiki span {color: var(--shiki-dark);background: var(--shiki-dark-bg);font-style: var(--shiki-dark-font-style);font-weight: var(--shiki-dark-font-weight);text-decoration: var(--shiki-dark-text-decoration);}html pre.shiki code .sJ8bj, html code.shiki .sJ8bj{--shiki-default:#6A737D;--shiki-dark:#6A737D}html pre.shiki code .sScJk, html code.shiki .sScJk{--shiki-default:#6F42C1;--shiki-dark:#B392F0}html pre.shiki code .sZZnC, html code.shiki .sZZnC{--shiki-default:#032F62;--shiki-dark:#9ECBFF}html pre.shiki code .sj4cs, html code.shiki .sj4cs{--shiki-default:#005CC5;--shiki-dark:#79B8FF}html pre.shiki code .s9eBZ, html code.shiki .s9eBZ{--shiki-default:#22863A;--shiki-dark:#85E89D}html pre.shiki code .sVt8B, html code.shiki 
.sVt8B{--shiki-default:#24292E;--shiki-dark:#E1E4E8}",{"title":205,"searchDepth":218,"depth":218,"links":1069},[1070,1071,1072,1073,1074],{"id":70,"depth":218,"text":71},{"id":164,"depth":218,"text":165},{"id":418,"depth":218,"text":419},{"id":604,"depth":218,"text":605},{"id":934,"depth":218,"text":935},"Selecting the correct parallel execution engine for a mature pytest suite requires moving beyond superficial benchmark numbers and understanding the underlying execution semantics, serialization boundaries, and hook interception models. The pytest-xdist vs pytest-parallel performance comparison ultimately resolves to a trade-off between strict process isolation and lightweight concurrency overhead. Both plugins fundamentally alter pytest’s default sequential execution loop, but they achieve this through divergent architectural paradigms that dictate their suitability for CI\u002FCD pipelines, local development workflows, and complex test topologies.","md",{},"\u002Fadvanced-pytest-architecture-configuration\u002Foptimizing-test-discovery\u002Fpytest-xdist-vs-pytest-parallel-performance-comparison",{"title":5,"description":1075},"advanced-pytest-architecture-configuration\u002Foptimizing-test-discovery\u002Fpytest-xdist-vs-pytest-parallel-performance-comparison\u002Findex","w88PFkqdr6cSJMQwlq45QPU1wP54b_Qaz1zuXFeuc5s",1778004579207]