
Feature request: Manually invalidate slot cache #2516

Open
jkenn99 opened this issue Jun 27, 2024 · 7 comments
jkenn99 commented Jun 27, 2024

I'm running into an issue where RedisCluster keeps trying to connect to non-existent cluster nodes when redis.clusters.cache_slots is enabled. Currently, the only way to invalidate the cache is to restart php-fpm. Instead, I would like to build slot-cache invalidation into my PHP application, and would like RedisCluster to expose a PHP method that permits this.
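To illustrate the request, here is a hypothetical sketch of what such an API could look like. The method name `clearSlotCache()` is invented for illustration; phpredis does not currently expose anything like it.

```php
<?php
// Hypothetical sketch only: clearSlotCache() does not exist in
// phpredis today. It illustrates the method being requested.
$cluster = new RedisCluster(null, ['10.0.0.0:6379'], 1.5, 1.5, true);

// After the application detects stale topology (e.g. a replaced
// node), it would drop the per-process slot cache so the next
// command re-runs CLUSTER SLOTS against a live node:
$cluster->clearSlotCache();
```

Because the cache is a persistent global in each php-fpm child rather than shared memory, each worker would need to invalidate its own copy.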

@michael-grunder michael-grunder self-assigned this Jun 27, 2024
@michael-grunder
Member

Theoretically, fixing the underlying cache-invalidation problem would solve the issue as well. As always, the trick is replicating the failure.

I can provide a method to invalidate the cache, but some complexity here will have to be worked through. The cache does not live in shared memory, meaning it is a global persistent variable in each php-fpm child process.

Is your idea to try and flush the cache when you encounter a failure?

@jkenn99
Author

jkenn99 commented Jun 27, 2024

> fixing the underlying cache invalidation problem would solve the issue as well

Yes, this would also work, but I am not sure at this point how aggressive you would need to be with the invalidation.

> Is your idea to try and flush the cache when you encounter a failure?

Yes, and to experiment with the conditions under which it should occur.

@michael-grunder
Member

> Yes, this would also work, but I am not sure at this point how aggressive you would need to be with the invalidation.

I'll double-check to be sure, but we flush the cache when we receive a MOVED or ASKING reply. The thinking here is that migrations are infrequent and CLUSTER SLOTS is cheap.

Implementing the method should actually be simple, so it shouldn't take very long.
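For context, a MOVED redirection is what the server returns when a slot has migrated to another node; the transcript below is an illustrative redis-cli session (addresses and slot number are made up):

```
127.0.0.1:7000> GET foo
(error) MOVED 12182 127.0.0.1:7002
```

On seeing such a reply, the client knows its cached slot map is stale and can refresh it with CLUSTER SLOTS. The bug discussed below is that a replaced node produces a connection failure, not a MOVED reply, so this refresh path is never triggered.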

@jkenn99
Author

jkenn99 commented Jun 27, 2024

> I'll double-check to be sure but we flush the cache when we receive a MOVED or ASKING reply. The thinking here is that migrations are infrequent and CLUSTER SLOTS is cheap.

I'm not running into issues with a slot getting migrated; the problem is when a cluster node is replaced. For example, on ElastiCache, CLUSTER SLOTS returns a slot range like:

1) 1) (integer) 0
    2) (integer) 100
    3) 1) "10.0.0.0"
       2) (integer) 6379
       3) "some-id"
       4) 1) "hostname"
          2) ""
    4) 1) "10.0.0.1"
       2) (integer) 6379
       3) "another-id"
       4) 1) "hostname"
          2) ""

If the host 10.0.0.1 is replaced by 10.0.0.2 and cache_slots is enabled, we keep trying to connect to 10.0.0.1 until php-fpm is restarted.

@jkenn99
Author

jkenn99 commented Jun 27, 2024

Further details, since it sounds like you want to fix this in phpredis:

No exception is being thrown by RedisCluster for this event, instead I'm getting this notice:

Notice: RedisCluster::mget(): Send of 18 bytes failed with errno=32 Broken pipe
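A retry wrapper along these lines is what the requested method would enable. This is a hypothetical sketch: `clearSlotCache()` does not exist, and, per this report, the current failure surfaces only as a notice rather than an exception, so the catch below assumes the underlying bug is also fixed to throw.

```php
<?php
// Hypothetical sketch: assumes a clearSlotCache() method existed
// and that a dead-node failure threw RedisClusterException
// (today it only raises a "Broken pipe" notice).
function mgetWithRetry(RedisCluster $cluster, array $keys): array
{
    try {
        return $cluster->mget($keys);
    } catch (RedisClusterException $e) {
        // Drop the cached CLUSTER SLOTS map so the next attempt
        // rediscovers the topology, then retry once.
        $cluster->clearSlotCache();
        return $cluster->mget($keys);
    }
}
```

Today, the only workaround is restarting php-fpm so each child process rebuilds its persistent slot cache from scratch.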

@michael-grunder
Member

I'm going to try and solve the underlying issue of PhpRedis not updating the cache in this situation. That's clearly a bug.

However, I see no reason not to also provide a utility function to manually flush the cache. It's simple, and I don't really see any downside.

@jkenn99
Author

jkenn99 commented Sep 17, 2024

> I'm going to try and solve the underlying issue of PhpRedis not updating the cache in this situation. That's clearly a bug.

Have you had any movement on this? I'm happy to help test any experimental branch if you have one.
