We have an application that writes to Redis hashes very frequently.
We receive incoming messages: for every record, one message every 10 seconds.
The application is written in TypeScript, using ioredis as the client.
The application runs in EKS with multiple replicas (horizontally scaled).
Redis runs in EKS as well (Redis Stack). The module used is Redis Search.
We have two indexes on those records.
One is a small index, used by an API to monitor the updates.
The other index has more fields in it (it is used by a search API to look up records based on various fields).
The way our code works:
1. A message comes in.
2. Get the record out of Redis (hgetall).
3. Update the JavaScript object based on the content of the message.
4. Write the data back to Redis (hset).
This is done every 10 seconds.
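For reference, the read-modify-write cycle above can be sketched roughly like this (the names `applyMessage` and `processMessage`, and the message shape, are illustrative, not taken from our actual code; only the `hgetall`/`hset` calls match what we do):

```typescript
// Minimal sketch of the per-message cycle, assuming an ioredis-compatible client.
// Only hgetall/hset are used, so the client is typed as a small interface here.
interface HashClient {
  hgetall(key: string): Promise<Record<string, string>>;
  hset(key: string, data: Record<string, string>): Promise<number>;
}

// Hypothetical message shape: an id plus the hash fields to update.
interface Message {
  recordId: string;
  fields: Record<string, string>;
}

// Pure update step: merge the message fields into the current record.
function applyMessage(
  record: Record<string, string>,
  msg: Message
): Record<string, string> {
  return { ...record, ...msg.fields, updatedAt: String(Date.now()) };
}

// Called once per incoming message (roughly every 10 s per record).
// Returns the hset latency in ms, matching the logging described below.
async function processMessage(client: HashClient, msg: Message): Promise<number> {
  const key = `record:${msg.recordId}`;
  const record = await client.hgetall(key); // 1. read
  const updated = applyMessage(record, msg); // 2. modify
  const before = Date.now();
  await client.hset(key, updated); // 3. write back
  return Date.now() - before; // write latency in ms
}
```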
When we have over 28,000 unique records, the writes can take over 80 seconds; I have even seen longer times. I have logging enabled that records the time in ms immediately before and after the hset call.
However, when I drop the indexes, a write takes about 10 ms or less.
Are there any settings in the Redis configuration file, or for the search module, that could improve performance?
Or could we use a read replica to read the data and the master to write it? (I'm not sure how to set that up in EKS; if this is a solution, I would need help finding documentation on how to do that.)
Thanks for your message @wernermorgenstern. From what you shared, it seems the updates are triggering the index garbage collector more often than needed. You can check how the GC cycles are going in the FT.INFO stats.
For tuning, you can try increasing FORK_GC_CLEAN_THRESHOLD to a higher limit, keeping in mind that a higher threshold slightly increases memory usage.
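To inspect the GC stats from the ioredis client, something like the sketch below should work. It assumes the usual FT.INFO reply shape (a flat array of alternating names and values, with `gc_stats` itself being such an array) and uses ioredis' generic `call` dispatch; the helper names are my own, and the threshold value in the comment is only a placeholder to tune for your workload:

```typescript
// Sketch: read gc_stats from FT.INFO and, if needed, raise FORK_GC_CLEAN_THRESHOLD.
// FT.INFO replies with alternating names and values; some values (like gc_stats)
// are themselves such arrays, so pairsToObject recurses into them.
type InfoReply = Array<string | number | InfoReply>;

function pairsToObject(pairs: InfoReply): Record<string, unknown> {
  const out: Record<string, unknown> = {};
  for (let i = 0; i + 1 < pairs.length; i += 2) {
    const value = pairs[i + 1];
    out[String(pairs[i])] = Array.isArray(value) ? pairsToObject(value) : value;
  }
  return out;
}

async function logGcStats(
  client: { call(...args: string[]): Promise<unknown> },
  index: string
): Promise<void> {
  const info = pairsToObject((await client.call("FT.INFO", index)) as InfoReply);
  console.log(info["gc_stats"]); // cycle counts, run times, bytes collected, etc.
  // If GC runs too often, raise the clean threshold (trades some memory for speed):
  // await client.call("FT.CONFIG", "SET", "FORK_GC_CLEAN_THRESHOLD", "10000");
}
```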