Do not call caching functions with mutex in bucketIndexReader #3400
Conversation
Thank you for addressing my feedback.
@@ -413,9 +413,9 @@ func (r *bucketIndexReader) fetchPostings(ctx context.Context, keys []labels.Lab
// Return postings and fill LRU cache.
// Truncate first 4 bytes which are length of posting.
output[p.keyID] = newBigEndianPostings(pBytes[4:])
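For context on the snippet above: the comment says the first 4 bytes of the cached value are the posting length, so decoding skips them. A minimal sketch of that layout, assuming a big-endian uint32 length prefix (the bytes here are made up, not the real index format):

```go
package main

import (
	"encoding/binary"
	"fmt"
)

func main() {
	// Hypothetical cached value: a 4-byte length prefix followed by the
	// payload. Decoding advances past the prefix, mirroring pBytes[4:] above.
	raw := []byte{0x00, 0x00, 0x00, 0x03, 0xAA, 0xBB, 0xCC}

	length := binary.BigEndian.Uint32(raw[:4]) // assumed big-endian prefix
	payload := raw[4:]                         // what newBigEndianPostings would receive

	fmt.Println(length, payload) // 3 [170 187 204]
}
```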
Why do we need a lock around this access too? It is a slice and I assume we only assign each element once
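To illustrate, this is the pattern I have in mind (a minimal, hypothetical sketch, not the actual `fetchPostings` code): a preallocated slice where each goroutine writes only its own index, and a `WaitGroup` provides the happens-before edge before the results are read.

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	// Preallocated: the slice is never resized while the goroutines run.
	output := make([]int, 8)

	var wg sync.WaitGroup
	for i := range output {
		wg.Add(1)
		go func(keyID int) {
			defer wg.Done()
			// Each goroutine writes only its own element, so the writes
			// never overlap and no mutex is needed for them.
			output[keyID] = keyID * keyID
		}(i)
	}
	// wg.Wait() establishes the happens-before edge that makes all the
	// writes visible before output is read.
	wg.Wait()

	fmt.Println(output)
}
```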
Follow-up PR :)
The answer is that we need to protect `output` from concurrency, so the lock is required. But we don't need a per-`bucketIndexReader` mutex: we can just have a local one. In the next PR I will get rid of `bucketIndexReader.mtx` completely.
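By a local mutex I mean something like the following sketch (hypothetical names, not the actual `bucketIndexReader` code): the mutex is scoped to the call and guards only `output`, while the slow cache/store lookup happens outside the critical section.

```go
package main

import (
	"fmt"
	"sync"
)

// fetchAll is a hypothetical stand-in for fetchPostings: it resolves each
// key concurrently and collects the results into output.
func fetchAll(keys []string, fetch func(string) int) []int {
	output := make([]int, len(keys))

	var (
		mtx sync.Mutex // local: guards output only, for this call's lifetime
		wg  sync.WaitGroup
	)
	for i, key := range keys {
		wg.Add(1)
		go func(i int, key string) {
			defer wg.Done()
			value := fetch(key) // the slow cache/store lookup runs unlocked

			mtx.Lock() // the lock is held only for the assignment
			output[i] = value
			mtx.Unlock()
		}(i, key)
	}
	wg.Wait()
	return output
}

func main() {
	fmt.Println(fetchAll([]string{"a", "bb", "ccc"}, func(k string) int { return len(k) }))
}
```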
but isn"t output[p.keyID] = x
executed only once for every possible p.keyID
? My argument is that there is no concurrent access on the slice elements. Each goroutine accesses its own subset of output
. Isn"t that true?
You"re right. Unless bugs in the postings resizing, there"s no concurrent access for a given key and the slices is preallocated. I"m convinced we can remove it, but I still would prefer to do it in a separate PR (the one removing mtx
at all) given this PR focus is "Do not call caching functions with mutex in bucketIndexReader".
See: #3500
Force-pushed from 8659f2a to e6278c6, then from e6278c6 to bca260a.
LGTM
Your counterargument is convincing enough for me to accept the change.
What this PR does
In bucketIndexReader there are a couple of places where we call the caching functions while holding the bucketIndexReader mutex. I think this is not required, as also pointed out by Dimitar in this comment: #3396 (comment).

Why I think it's not necessary:
- bucketIndexReader.mtx is per tenant and block.
- The caching functions don't manipulate any bucketIndexReader state, and I can't see anything that requires holding bucketIndexReader.mtx while calling them.

This PR is on top of #3398.
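In other words, the shape of the change is roughly this (a hedged sketch with hypothetical names, not the actual code): the caching call moves outside the critical section, and the mutex is held only while mutating shared state.

```go
package bucketreader

import "sync"

// cache is a hypothetical stand-in for the caching layer.
type cache interface {
	Fetch(key string) []byte
}

// reader is a hypothetical stand-in for bucketIndexReader.
type reader struct {
	mtx    sync.Mutex
	cache  cache
	loaded map[string][]byte
}

// loadHoldingLock keeps the reader's mutex held across the cache call,
// serializing every caller for the duration of the cache round trip.
func (r *reader) loadHoldingLock(key string) {
	r.mtx.Lock()
	defer r.mtx.Unlock()
	r.loaded[key] = r.cache.Fetch(key) // slow call under the lock
}

// loadOutsideLock calls the cache first and takes the mutex only to
// mutate shared state, which is the shape of the change described above.
func (r *reader) loadOutsideLock(key string) {
	data := r.cache.Fetch(key) // no lock held during the cache call

	r.mtx.Lock()
	r.loaded[key] = data
	r.mtx.Unlock()
}
```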
Which issue(s) this PR fixes or relates to
N/A
Checklist
- CHANGELOG.md updated - the order of entries should be [CHANGE], [FEATURE], [ENHANCEMENT], [BUGFIX]