Pass affected labels to MemPostings.Delete()
#14307
Conversation
As suggested by @bboreham, we can track the labels of the deleted series and avoid iterating through all the label/value combinations. This makes the MemPostings.Delete call much faster. We don't have a benchmark on stripeSeries.gc(), where we'll pay the price of iterating the labels of each of the deleted series.

Signed-off-by: Oleg Zaytsev <[email protected]>
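For illustration, here's a minimal, self-contained Go sketch of the idea. The types (`seriesRef`, `label`, `memPostings`) are simplified stand-ins for the real `storage.SeriesRef`, `labels.Label` and `index.MemPostings`, and the method body is an approximation of the approach described above, not the actual Prometheus code:

```go
package main

import (
	"fmt"
	"sync"
)

// Simplified stand-ins for the real TSDB types; this illustrates the idea,
// it is not the Prometheus implementation.
type seriesRef uint64

type label struct{ name, value string }

type memPostings struct {
	mtx sync.RWMutex
	m   map[string]map[string][]seriesRef // label name -> value -> sorted series refs
}

// Delete drops the deleted series refs, but only from the postings lists of
// the affected labels collected by the caller while it was deleting series,
// instead of scanning every label/value pair in the index.
func (p *memPostings) Delete(deleted map[seriesRef]struct{}, affected map[label]struct{}) {
	p.mtx.Lock()
	defer p.mtx.Unlock()

	for l := range affected {
		orig := p.m[l.name][l.value]
		repl := make([]seriesRef, 0, len(orig))
		for _, id := range orig {
			if _, ok := deleted[id]; !ok {
				repl = append(repl, id)
			}
		}
		if len(repl) > 0 {
			p.m[l.name][l.value] = repl // swap in a fresh slice; readers may still hold the old one
		} else {
			delete(p.m[l.name], l.value)
			if len(p.m[l.name]) == 0 {
				delete(p.m, l.name)
			}
		}
	}
}

func main() {
	p := &memPostings{m: map[string]map[string][]seriesRef{
		"job": {"api": {1, 2, 3}},
	}}
	p.Delete(
		map[seriesRef]struct{}{2: {}},
		map[label]struct{}{{name: "job", value: "api"}: {}},
	)
	fmt.Println(p.m["job"]["api"]) // [1 3]
}
```

The key point is that the caller (the head GC) collects the affected label/value pairs while it deletes series, so Delete only touches those postings lists instead of walking the whole label index.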
/prombench main
I have added a benchmark for
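The comment above is truncated on this page, so it isn't clear which function the new benchmark targets. Purely as a hypothetical illustration, a Go benchmark against the simplified memPostings sketch from earlier (assuming the same seriesRef/label types and imports of `testing` and `fmt`) could look like this:

```go
func BenchmarkMemPostingsDelete(b *testing.B) {
	const series = 100_000
	// Every tenth series is deleted; all series carry one of 100 "job" values,
	// so only those 100 label/value pairs are affected.
	deleted := map[seriesRef]struct{}{}
	affected := map[label]struct{}{}
	for i := 0; i < series; i += 10 {
		deleted[seriesRef(i)] = struct{}{}
		affected[label{name: "job", value: fmt.Sprintf("instance-%d", i%100)}] = struct{}{}
	}

	for n := 0; n < b.N; n++ {
		b.StopTimer()
		// Rebuild the index every iteration, since Delete mutates it.
		p := &memPostings{m: map[string]map[string][]seriesRef{"job": {}}}
		for i := 0; i < series; i++ {
			v := fmt.Sprintf("instance-%d", i%100)
			p.m["job"][v] = append(p.m["job"][v], seriesRef(i))
		}
		b.StartTimer()

		p.Delete(deleted, affected)
	}
}
```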
LGTM
This is an optimization on top of the former optimization. For some reason it rings a bell that I had already read about, or discussed with you, only working over the affected subset 👍 So thanks for adding this.
This introduces back some unlocking that was removed in prometheus#13286, but in a more balanced way, as suggested by @pracucci. For TSDBs with a lot of churn, Delete() can take a couple of seconds, and while it's holding the mutex, reads and writes are blocked waiting for it, increasing the number of connections handled and the memory usage. This implementation pauses every 4K labels processed (note that, compared to prometheus#13286, we're no longer processing all the label/value pairs but only the affected ones, because of prometheus#14307), makes sure that it's possible to get the read lock, and waits for a few milliseconds more.

Signed-off-by: Oleg Zaytsev <[email protected]>
Co-authored-by: Marco Pracucci <[email protected]>
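A rough sketch of the pattern that commit message describes, reusing the simplified memPostings type from the earlier sketch (the constant, the sleep duration and the deleteLabelLocked helper are illustrative assumptions, not the actual implementation; it also needs an import of `time`):

```go
// unlockEvery is illustrative; the commit message mentions pausing every ~4K labels.
const unlockEvery = 4096

func (p *memPostings) deleteWithPauses(deleted map[seriesRef]struct{}, affected map[label]struct{}) {
	i := 0
	p.mtx.Lock()
	for l := range affected {
		if i > 0 && i%unlockEvery == 0 {
			p.mtx.Unlock()
			// Make sure the read lock is acquirable again, which lets any
			// queued readers in alongside us.
			p.mtx.RLock()
			p.mtx.RUnlock()
			time.Sleep(5 * time.Millisecond) // "a few milliseconds more" for other writers
			p.mtx.Lock()
		}
		// Hypothetical helper: the per-label filtering from the earlier sketch,
		// assuming the write lock is already held.
		p.deleteLabelLocked(deleted, l)
		i++
	}
	p.mtx.Unlock()
}
```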
The change introduced in prometheus#14307 was great for the performance of MemPostings.Delete(): we know which labels we have to touch, so we just grab the lock once and keep processing them until we're done. While this is best for MemPostings.Delete(), it's much worse for other users of that mutex: readers and new series writes. These now have to wait until we've deleted all the affected postings, which are potentially millions. Our operation isn't so urgent, but we're not letting other users grab the mutex.

While prometheus#15242 proposes a solution by performing small pauses from time to time and letting other callers take the mutex, it still doesn't address the elephant in the room: we're just doing too much work while holding the write mutex, and that is exclusive time for us. We can spread it over a longer period, decreasing the impact, but the overall exclusive time is the same, and we should try to address that.

What's the long operation we're doing while holding the write mutex? For each label we're filtering the postings list by building a new one, looking up in a hashmap whether each of its elements is deleted. These lists are potentially tens or hundreds of thousands of elements each.

Why don't we build that list while holding just a RLock()? There's a small issue with that: a new series could be added to the list after we've built the filtered replacement postings list and before we take the write mutex. The good news is that postings are always appended in order to the list and we are the only ones who delete from it, so if we just check whether the list grew, we can know for sure whether we're missing something. Moreover, we don't even need to rebuild the entire list if something was added; we just need to append the extra elements that the list has now compared to our snapshot.

Finally, one last thing to address: with this approach we'd be taking the write mutex once per affected label value again, which is a lot, causes overly long compactions under heavy read pressure, and is something we mitigated in the past. We also can't just build all the replacement lists and then swap them in one go: we would be referencing too much memory. It is true that while we're swapping there's still someone referencing the old slice from some reader, but not at an arbitrary scale: for a 2M series TSDB, a label value which references all series (like the typical env=prod) takes 16MB of postings (2M * 8B). So we process labels in batches, with at most 128 labels in a batch (so ideally we take one write mutex per 128 labels) and at most 10*len(allPostings) postings in the batch (in our example that would be an extra 160MB allocated temporarily, which should be negligible in a 2M series instance).

Signed-off-by: Oleg Zaytsev <[email protected]>
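A hedged sketch of the core idea from that commit message, again on the simplified memPostings type from above (names are illustrative, and the 128-labels / 10*len(allPostings) batching is omitted for brevity):

```go
// deleteLabel filters one postings list, doing most of the work under the
// read lock and only the final swap under the write lock.
func (p *memPostings) deleteLabel(deleted map[seriesRef]struct{}, l label) {
	// 1. Snapshot and filter under RLock: readers and appenders stay unblocked.
	p.mtx.RLock()
	orig := p.m[l.name][l.value]
	p.mtx.RUnlock()

	repl := make([]seriesRef, 0, len(orig))
	for _, id := range orig {
		if _, ok := deleted[id]; !ok {
			repl = append(repl, id)
		}
	}

	// 2. Swap under the write lock. If the list grew since the snapshot,
	// append only the tail we haven't seen: new series can't be in the
	// deleted set, and nothing we already filtered can have changed.
	p.mtx.Lock()
	defer p.mtx.Unlock()
	cur := p.m[l.name][l.value]
	if len(cur) > len(orig) {
		repl = append(repl, cur[len(orig):]...)
	}
	if len(repl) > 0 {
		p.m[l.name][l.value] = repl
	} else {
		delete(p.m[l.name], l.value)
		if len(p.m[l.name]) == 0 {
			delete(p.m, l.name)
		}
	}
}
```

The correctness argument is the one from the message: postings are only appended, in order, by writers, and only this code removes entries, so comparing lengths is enough to detect and append whatever arrived between the snapshot and the swap.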