perf: reduce large pre-allocations for JavascriptParser::new
#7286
Summary

`AsyncDependenciesBlock` is 208 bytes in size, and it was pre-allocated with a capacity of 256. Taking a project with 10,000 modules, that allocates (and destroys) at least 256 * 208 * 10000 bytes (~507 MB), causing memory usage to swing up and down on every build. Since the layout of `AsyncDependenciesBlock` is already quite compact, two approaches remain:

1. Change `Vec<AsyncDependenciesBlock>` to `Vec<Box<AsyncDependenciesBlock>>`.
2. Reduce the initial capacity of the `Vec` to a smaller value.

The first approach shrinks the pre-allocation from 256 * 208 bytes to 256 * 24 bytes, a significant memory reduction, though with some potential overhead from moving each value to the heap. The second changes the capacity from 256 to 64.

I also changed the initial capacity of other vecs, although those changes do not (and should not) make much difference in memory cost.
Performance for big projects

The performance difference between these two changes is unnoticeable on our internal project of 38,134 modules; each build takes around 30 seconds either way on my machine (Apple M2 Max, 64 GB).
Memory cost
The memory cost here is a constant value per module, so different numbers of modules produce different amounts of allocation churn. Taking the 10,000-module case as an example, we can see the difference in memory created and destroyed during each build:
Before:
After:
That is a 12.5% improvement in the total memory allocated.
Clippy complaints about `Vec<Box<T>> where T: Sized`

Clippy complains that the `T` of a `Vec<T>` is already allocated on the heap, making the `Box` look redundant. Since `AsyncDependenciesBlock` is a huge struct with that much pre-allocation, we can work around the lint as discussed on its detail page: rust-lang/rust-clippy#3530