Enable PGO for benchmarks #18
Look at autofdo with
I do ongoing PGO research on different applications - all results are available at https://github.com/zamazan4ik/awesome-pgo . I performed some PGO benchmarks on this project: release benchmarks on my test environment. [Test environment details, benchmark setup, and the detailed result tables omitted.]
At least in the benchmarks provided by the project, there are measurable improvements in many cases. However, there are also some regressions.
I recommend starting with regular instrumentation-based PGO. AutoFDO is used for the sampling-based PGO approach. Starting with instrumentation is generally a better idea, since it has wider platform support and can be enabled for the project more easily than sampling-based PGO.
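For reference, a minimal sketch of the instrumentation-based flow for a cargo project; the `/tmp/pgo-data` directory and the `my-bench` binary name are placeholders, not this project's actual benchmark setup:

```sh
# 1. Build the release binary with PGO instrumentation;
#    raw .profraw profiles will be written to /tmp/pgo-data at runtime.
RUSTFLAGS="-Cprofile-generate=/tmp/pgo-data" cargo build --release

# 2. Run a representative workload (e.g. the project's benchmarks)
#    so the instrumented binary emits profile data.
./target/release/my-bench   # placeholder binary name

# 3. Merge the raw profiles into a single .profdata file
#    (llvm-profdata is available via the llvm-tools-preview rustup
#    component or a matching LLVM installation).
llvm-profdata merge -o /tmp/pgo-data/merged.profdata /tmp/pgo-data

# 4. Rebuild with the merged profile to get the PGO-optimized binary.
RUSTFLAGS="-Cprofile-use=/tmp/pgo-data/merged.profdata" cargo build --release
```

Helper tools such as cargo-pgo wrap roughly this sequence. Sampling-based PGO (AutoFDO) instead collects profiles with a hardware sampler on an uninstrumented build, which is where the extra platform and tooling requirements come from.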
Great job presenting the PGO results! In general, it seems like PGO increases average performance by ~5% but introduces noise: not necessarily from run to run, but from benchmark to benchmark and version to version. It might make more sense to average PGO results over all datasets (and just live with the fact that there are only 4, so the average isn't totally immune to noise). Could also average over all crates and just report a single PGO result. The goal is to give users an accurate answer to 1) which crate to use and 2) whether to try PGO.
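As a hedged illustration of the "single PGO result" idea (not something from the thread): if each benchmark/dataset is reduced to a ratio of PGO time over baseline time, a geometric mean is a common way to collapse those ratios into one number:

```sh
# ratios.txt: one "pgo_time / baseline_time" ratio per line, one line per
# benchmark/dataset (hypothetical file; values < 1.0 mean PGO was faster).
awk '{ s += log($1) } END { printf "geomean speed ratio: %.3f\n", exp(s / NR) }' ratios.txt
```

A geometric mean is usually preferred over an arithmetic mean for ratios, so that a 2x speedup and a 2x slowdown cancel out rather than skewing the average.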
Enabling profile-guided optimization will give each framework its best-case numbers. It might be worth separating these out from the general numbers so users can get an idea of how much they stand to gain for the extra effort.