Add performance benchmark #88
Conversation
Block quotes don't seem to be getting rendered correctly: https://181-240151150-gh.circle-artifacts.com/0/html/using/benchmark.html#blockquote. However, it seems to be parsing as expected:

```md
> This is the first level of quoting.
>
> > This is nested blockquote.
>
> Back to the first level.
```

```xml
<document source="notset">
    <block_quote>
        <paragraph>
            This is the first level of quoting.
        <block_quote>
            <paragraph>
                This is nested blockquote.
        <paragraph>
            Back to the first level.
```
@choldgraf the issue above looks to be another fix for the
This reverts commit 5761fdb.
Good catch, opened up pydata/pydata-sphinx-theme#103. By the way, I'm curious how this maps onto, say, the amount of content that the QuantEcon book has. It seems like our Sphinx parser will take relatively more time, but what about the absolute time for an amount of content like what the QE lectures have?
How would you envisage benchmarking this? As we saw before in your profiling, the real bottleneck will be in calling certain roles/directives that do a lot of processing (perhaps we could add a profiler for that, or upstream one to Sphinx). Raw parsing speed would mainly be a factor if you are doing 'real-time' parsing (for linting, previews, etc.); there you probably wouldn't actually call all the directives/roles (maybe just a small 'whitelist').
@chrisjsewell I don't care so much about benchmarking this as about having a number we can use to convince people that performance won't be an issue here. Your test doc was 1000 lines (which is probably longer than most), and you ran the iteration 1000 times. E.g., does this mean that if the process took 70 seconds, we get 70 / 1000 = 0.07 seconds per page, or roughly 1 second of processing per 10 pages? I'm just trying to tie these benchmarking numbers to people's expected subjective experience.
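For concreteness, the per-page arithmetic being discussed could be sketched like this with `timeit`. This is a hypothetical sketch, not the benchmark actually used in the PR: `parse` here is a stand-in placeholder for whatever parser entry point is being timed, and the 1000-line document is synthetic.

```python
import timeit

def parse(text):
    """Placeholder for the real parser call being benchmarked."""
    return text.splitlines()

# Roughly a 1000-line test document, as in the discussion above.
source = "\n".join(f"line {i}" for i in range(1000))

iterations = 1000
total = timeit.timeit(lambda: parse(source), number=iterations)

# Total time / iterations gives the time for one ~1000-line page.
per_page = total / iterations
print(f"{per_page:.4f} s per page, ~{10 * per_page:.2f} s per 10 pages")
```

So a 70-second run over 1000 iterations would indeed work out to 0.07 s per page under this scheme.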
Going back to my patented sphinx summary:

It means that stage (6) will take x/1000 seconds per page, with x being the value for the

You could write a function to run/measure this, given a certain set of source files (in the same way I run contained sphinx builds in
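A function to measure where a build spends its time could be sketched with the standard-library profiler. This is only an illustrative sketch: `build` is a hypothetical placeholder for the real Sphinx build or parse call, not an actual API from this project.

```python
import cProfile
import io
import pstats

def build():
    """Placeholder for the real build/parse call to be measured."""
    sum(i * i for i in range(100_000))  # stand-in workload

# Profile the callable and collect stats into a string buffer.
profiler = cProfile.Profile()
profiler.enable()
build()
profiler.disable()

stream = io.StringIO()
stats = pstats.Stats(profiler, stream=stream).sort_stats("cumulative")
stats.print_stats(5)  # top 5 entries by cumulative time
print(stream.getvalue())
```

Sorting by cumulative time surfaces the expensive roles/directives rather than the raw parsing, which matches the bottleneck observed in the earlier profiling.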