ConcurrentDictionary.TryRemove sometimes does not remove an existing key #107525
Comments
Tagging subscribers to this area: @dotnet/area-system-collections
Don't do this. Use a separate object to lock on (and on .NET 9, use the new `Lock` type). This use of locks would be suspicious in real-world code, because it would mean you don't actually need a concurrent dictionary at all. These modifications are made on the retrieved entry, but regardless of any potential bug discovered here, if this reflects your real-world code you may have much larger application design issues.
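The locking advice above can be sketched as follows; `Worker`, `_gate`, and `RemoveUnderLock` are hypothetical names invented for illustration, not code from this issue:

```csharp
using System.Collections.Concurrent;

class Worker
{
    private readonly ConcurrentDictionary<int, string> _dictionary = new();

    // Lock on a dedicated object rather than on the dictionary instance.
    // On .NET 9+, a System.Threading.Lock field can be used here instead;
    // on earlier versions a plain object works with the lock statement.
    private readonly object _gate = new();

    public void RemoveUnderLock(int key)
    {
        lock (_gate) // not lock (_dictionary)
        {
            _dictionary.TryRemove(key, out _);
        }
    }
}
```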
Thanks for the quality report. The bug is #82004: it tries to optimize removal from an empty bucket by reading the bucket's count prior to taking the lock, while `TryAdd` adds an element and only then increments the corresponding bucket count. That means there's a window during which `ContainsKey`/`TryGetValue` can find the item in the dictionary but `TryRemove` misses it, if a concurrent `TryAdd` put the item into an otherwise empty bucket. The right answer is to revert that PR, unfortunately.
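The interleaving described above can be modeled with a simplified sketch. This is an illustrative model, not the actual `ConcurrentDictionary` source; all names are invented:

```csharp
using System.Threading;

// Model of the race: the writer links a node into the bucket and only
// afterwards increments the bucket count, while the remover consults the
// count without taking the lock.
class BucketModel
{
    private readonly object _lock = new();
    private Node? _head;   // entries become reachable here first
    private int _count;    // updated after the node is linked

    private sealed record Node(int Key, Node? Next);

    public void Add(int key)          // models TryAdd on an empty bucket
    {
        lock (_lock)
        {
            _head = new Node(key, _head); // step 1: item is now findable
            // A concurrent lock-free reader can observe _head != null
            // while _count is still 0 at this point.
            _count++;                     // step 2: count catches up
        }
    }

    public bool Contains(int key)     // models ContainsKey's lock-free walk
    {
        for (var n = Volatile.Read(ref _head); n != null; n = n.Next)
            if (n.Key == key) return true;
        return false;
    }

    public bool TryRemove(int key)    // models the buggy fast path
    {
        if (Volatile.Read(ref _count) == 0)
            return false;             // misses an item that is already linked
        lock (_lock)
        {
            // walk the list and unlink the matching node (elided)
            return Contains(key);
        }
    }
}
```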
Description
I noticed strange behavior of the `ConcurrentDictionary.TryRemove` method in the latest stable version of .NET 8.0. While it is expected to always remove an existing key, as per Stephen Toub's answer and the de-facto behavior of all previous versions of .NET (including .NET 6.0 and 7.0) and .NET Framework, it no longer always does so in the latest versions.

The problem likely occurs when there are concurrent calls to the `GetOrAdd` or `TryAdd` methods that don't even modify anything, returning the already existing item or `false` respectively, so it looks like there's a race condition somewhere.

This behavior started to appear in SDK 8.0.100-preview.2 and does not exist in SDK 8.0.100-preview.1 or earlier. Please let me know if more information is required.
Reproduction Steps
Please consider the following sample program. It creates an instance of the `ConcurrentDictionary` class and performs simple parallel workloads in each iteration of the `while` loop. During each parallel iteration, it calls the `GetOrAdd` method, which creates an entry that can be processed and removed by only a single thread at a time – other threads wait until the first winner removes the entry, and until then `GetOrAdd` doesn't modify anything.

All the waits are global – locks are taken on the `dictionary` instance to highlight the problem. Each entry is processed by only a single thread, and this behavior is enforced by such a "global" lock, so it's impossible for any other thread to remove anything between the calls to the `ContainsKey` and `TryRemove` methods.

So when `ContainsKey` returns `true`, it is expected that the `TryRemove` call succeeds and also returns `true`. But as we see from the messages in .NET 8.0, this is not true anymore. The problem does not occur if we disable parallelism.
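The sample program itself is not included in this capture of the page; a minimal sketch of the kind of program described (structure, key, and names are assumptions, not the reporter's original code) could look like this:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class Program
{
    static void Main()
    {
        var dictionary = new ConcurrentDictionary<int, object>();
        while (true)
        {
            Parallel.For(0, Environment.ProcessorCount, _ =>
            {
                // Every thread races on the same key; GetOrAdd modifies
                // the dictionary only for the first winner.
                dictionary.GetOrAdd(1, _ => new object());

                // "Global" lock on the dictionary instance, so no other
                // thread can remove the entry between the ContainsKey and
                // TryRemove calls below.
                lock (dictionary)
                {
                    if (dictionary.ContainsKey(1) &&
                        !dictionary.TryRemove(1, out _))
                    {
                        // Expected to be unreachable, yet observed on
                        // .NET 8.0+ according to this report.
                        Console.WriteLine("TryRemove failed");
                    }
                }
            });
        }
    }
}
```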
As a workaround, we can force the removal by calling `TryRemove` in a loop, and eventually it will remove the entry, but that is not something one should expect to need with a concurrent dictionary unless documented.
Expected behavior
The sample program is expected to run indefinitely without any "failed" messages, as it does in .NET 6.0/7.0 and earlier:
Actual behavior
But in .NET 8.0 and later versions, it shows "failed" messages like the following within the first N seconds:
Regression?
The sample above works fine in SDK 8.0.100-preview.1 and below (.NET 7.0, .NET 6.0 and .NET Framework 4.8), but does not work in SDK 8.0.100-preview.2 and above, including .NET 8.0.401 and 9.0.100-preview.7.
Known Workarounds
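The retry workaround described in this section can be sketched as follows; `ForceRemove` and its parameters are hypothetical names added for illustration:

```csharp
using System.Collections.Concurrent;

static class Workaround
{
    // Sketch: retry TryRemove until it reports success. Assumes the key
    // is known to be present (as in the report, where ContainsKey has
    // just returned true under the lock); otherwise this would spin
    // forever.
    public static void ForceRemove<TKey, TValue>(
        ConcurrentDictionary<TKey, TValue> dictionary, TKey key)
        where TKey : notnull
    {
        while (!dictionary.TryRemove(key, out _))
        {
            // Intentionally empty: on affected .NET versions the removal
            // eventually completes after a few retries.
        }
    }
}
```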
Removal eventually completes when calling it in a `while` loop and checking the result:
Configuration
Version: .NET 8.0.401
OS: Windows 11 and Ubuntu 24.04
Architecture: x64
Processors: 24
Other information
No response