Speeding up Tests #4497

Closed · 4 of 12 tasks
pradyunsg opened this issue May 19, 2017 · 16 comments
Labels: C: tests (Testing and related things), type: maintenance (Related to Development and Maintenance Processes)

Comments

pradyunsg (Member) commented May 19, 2017

The test suite is slow: it takes a long time to run and to get results, and because of this the CI builds occasionally time out before completing.

There are various issues that affect the speed of the test suite. This issue is an overall tracking issue for them. I'll take a shot at each of them as time permits. The intent is to speed up the test suite without compromising the degree of confidence it provides.


(I'll add more as they come)

xavfernandez added the C: tests (Testing and related things) label on May 30, 2017
pradyunsg (Member, Author) commented Jun 25, 2017

Just for self-reference later, here are the integration test run times in seconds (as reported by pytest) for the last 5 Travis CI runs on master:

| Version | #6914 | #6913 | #6901 | #6899 | #6898 |
| ------- | ------- | ------- | ------- | ------- | ------- |
| 2.7 | 1491.73 | 1499.83 | 1461.50 | 1465.82 | 1465.74 |
| 3.3 | 1758.89 | 1662.34 | 1653.14 | 1648.96 | 1588.04 |
| 3.4 | 1589.49 | 1757.04 | 1696.77 | 1687.61 | 1608.33 |
| 3.5 | 1797.19 | 1795.63 | 1645.96 | 1603.81 | 1658.20 |
| 3.6 | 1669.28 | 1759.57 | 1814.60 | 1669.06 | 1695.59 |
| pypy | 2566.34 | 2579.24 | 2633.35 | 2575.63 | 2518.47 |

pradyunsg added the type: maintenance (Related to Development and Maintenance Processes) label on Jun 26, 2017
pradyunsg (Member, Author) commented:

#4586 shows a huge speedup in the CI builds. That doesn't mean this issue isn't still useful. :)

pradyunsg (Member, Author) commented:

It seems that the virtualenv fixture takes up a huge part of the test time... a rough measurement shows that a 98-second test run spends 35 seconds in that fixture.
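One common way to attack that kind of fixture cost is to build the environment once per session and hand each test a filesystem copy, since copying a directory tree is much cheaper than creating a fresh virtualenv. A minimal sketch of the idea, not pip's actual fixture (the `template_venv` and `venv` names here are hypothetical):

```python
# conftest.py - sketch of a copy-on-use virtualenv fixture (hypothetical).
import shutil
import subprocess
import sys

import pytest


@pytest.fixture(scope="session")
def template_venv(tmp_path_factory):
    """Create one virtualenv for the whole session (the expensive step)."""
    path = tmp_path_factory.mktemp("template") / "venv"
    subprocess.check_call([sys.executable, "-m", "virtualenv", str(path)])
    return path


@pytest.fixture
def venv(template_venv, tmp_path):
    """Hand each test a copy of the template (much cheaper than creating).

    A real implementation would also need to fix up the absolute paths
    baked into the copied environment's scripts.
    """
    path = tmp_path / "venv"
    shutil.copytree(template_venv, path, symlinks=True)
    return path
```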

pfmoore (Member) commented Jul 18, 2017

Would it be worth using venv when available as a quick fix? I don't know if venv is quicker, but it might be worth a try.
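A quick way to check would be to time both side by side. A minimal sketch, assuming the virtualenv package is installed alongside the stdlib venv module:

```python
# time_envs.py - rough comparison of venv vs. virtualenv creation time.
import subprocess
import sys
import tempfile
import time


def time_create(module, label):
    with tempfile.TemporaryDirectory() as tmp:
        start = time.perf_counter()
        subprocess.check_call([sys.executable, "-m", module, tmp + "/env"])
        print(f"{label}: {time.perf_counter() - start:.2f}s")


time_create("venv", "stdlib venv")        # Python 3.3+
time_create("virtualenv", "virtualenv")   # requires `pip install virtualenv`
```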

pradyunsg (Member, Author) commented:

Oh, I missed that last comment. I'll look into it soon. :)


FTR - now that YAML tests have been added, a bunch of installation tests can be removed and turned into YAML fixtures. That would reduce the clutter in data/packages. :)

pradyunsg (Member, Author) commented:

An update on the status quo (integration test run times in seconds):

| Version | #6914 | #7863 |
| ------- | ------- | ------- |
| 2.7 | 1491.73 | 645.50 |
| 3.6 | 1669.28 | 767.25 |
| pypy | 2566.34 | 1500.26 |

hugovk (Contributor) commented May 28, 2018

Here are the start-to-finish "Ran for X min Y sec" waiting times of a full build, for the three master builds before #5436 was merged, and the three after.

| Build | Time |
| ----- | ---- |
| 8769 | 34 min 46 sec |
| 8771 | 35 min 25 sec |
| 8779 | 34 min 54 sec |
| Average (before) | 35 min 02 sec |
| 8786 | 20 min 27 sec |
| 8787 | 19 min 47 sec |
| 8793 | 19 min 49 sec |
| Average (after) | 20 min 01 sec |

pradyunsg (Member, Author) commented:

@hugovk Not to suggest that the CI improvements aren't awesome; this issue tracks improving the test-suite speed itself -- which affects both local development and CI.

Hence the timings noted above being of "test run times", not CI run times. :)

pradyunsg (Member, Author) commented:

It might be handy to look at https://github.com/kvas-it/pytest-console-scripts, either as a replacement for our use of scripttest or as a way to extend scripttest.

I prefer the former.
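For reference, pytest-console-scripts provides a script_runner fixture, so a test that currently shells out via scripttest might look roughly like this (an illustrative sketch, not an actual pip test):

```python
# Sketch of a test using pytest-console-scripts' script_runner fixture.
def test_pip_version(script_runner):
    result = script_runner.run("pip", "--version")
    assert result.success
    assert "pip" in result.stdout
```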

brainwane (Contributor) commented Jan 27, 2020

We talked a bit about this issue in a call last week (and a little in a call on the 8th) as part of our donor-funded work on pip. @pradyunsg, as I understand it from our call last week, going through the entire execution cycle across all our providers/pipelines for a single pull request currently takes ~1 hour. Is that right?

chrahunt (Member) commented:

Each PR takes up to 30 minutes from submission until the checks are done. See for example:

  1. Remove unnecessary write_delete_marker_file #7653 - 26m
  2. Configure tempdir registry in BaseCommand #7652 - 30m
  3. Add new make_wheel test helper #7651 - 28m

which can be calculated as the max of the times for:

  1. "Linux" Azure Pipelines check (visible in check UI)
  2. "Windows" Azure Pipelines check (visible in check UI)
  3. "macOS" Azure Pipelines check (visible in check UI)
  4. Travis CI check (visible on Travis site)
  5. GitHub actions builds

In general, the Windows checks on Azure Pipelines take the longest, at up to 30m.

The next slowest after the Windows Azure Pipelines checks is Travis, at around 22m.

We can generally save time by:

  1. Increasing parallelization:
     1. Splitting jobs across more workers - if we split the tests into multiple "jobs" that can each run on a CI worker, the individual jobs will complete faster (see the sketch after this list).
     2. Allocating workers with more cores - this would only really be an option for self-hosted runners on Azure Pipelines.
  2. Not waiting to run long jobs - on Travis CI, if we kick off everything at the same time, a build would take as long as its longest job, instead of the longest job from "Primary" plus the longest job from "Secondary" (see here for a specific example).
  3. Doing a detailed analysis of where time is being spent during tests - from the investigation for "Put Temp on RAM Disk for Azure Pipelines tests" #7263, I have experience and tools that can help with this on Windows that I can communicate to someone.
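As a sketch of option 1.1: the collected tests can be split deterministically across CI jobs from a conftest.py hook. This is a hypothetical illustration, not pip's actual setup; the SHARD and NUM_SHARDS environment variables are made up for this example, and each CI job would run the same pytest command with different values for them.

```python
# conftest.py - deterministic test sharding across CI jobs (hypothetical).
import os


def pytest_collection_modifyitems(config, items):
    num_shards = int(os.environ.get("NUM_SHARDS", "1"))
    shard = int(os.environ.get("SHARD", "0"))
    if num_shards <= 1:
        return  # sharding disabled; run everything
    # Keep every num_shards-th collected test for this shard.
    kept = [item for i, item in enumerate(items) if i % num_shards == shard]
    dropped = [item for i, item in enumerate(items) if i % num_shards != shard]
    config.hook.pytest_deselected(items=dropped)
    items[:] = kept
```

(pytest-xdist's `-n` flag already covers parallelism within a single worker; this only helps spread work across separate CI jobs.)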

Generally with 1 and 2 we need to keep in mind the maximum number of concurrent jobs we can have executing at any time, otherwise we may run into limitations that would cause one PR to delay another one. At the moment that is not a concern because each CI provider gives us enough concurrent executors that we could have (I think) 3 PRs submitted at the same time without them delaying each other.

pradyunsg (Member, Author) commented Jan 28, 2020

It does vary a fair bit depending on load/queue times, but a little over 30 minutes sounds about right to me -- taking a look at the more recent CI runs, it seems my understanding of the CI run times was a bit outdated.

Looks like we've had a significant improvement recently by removing a bottleneck service (AppVeyor) in #7564, which might've brought the time down.

pradyunsg (Member, Author) commented:

> 3. I have experience and tools that can help with this on Windows that I can communicate to someone.

@chrahunt Color me interested. :)

chrahunt (Member) commented Jul 8, 2020

@pradyunsg, let's hope I saved my notes. :)

pradyunsg (Member, Author) commented:

Okay, I was able to run pip's entire test suite on my laptop, using 8 cores, in around 3 minutes. Further, we've completely changed our CI setup.

Do folks still think it's worth keeping this issue open?

pradyunsg added the S: awaiting response (Waiting for a response/more information) label on Sep 2, 2021
no-response bot commented Sep 17, 2021

This issue has been automatically closed because there has been no response to our request for more information from the original author. With only the information that is currently in the issue, we don't have enough information to take action. Please reach out if you have or find the answers we need so that we can investigate further.

no-response bot closed this as completed on Sep 17, 2021
github-actions bot locked as resolved and limited conversation to collaborators on Oct 18, 2021
pradyunsg removed the S: awaiting response (Waiting for a response/more information) label on Mar 17, 2023