I have a cluster in two DCs, each with one node and one rack. Regular keys seem to be consistent, but I have a set of about 65,000 IPs and I use SPOP and SADD to take and return an IP (data-session management). Currently my servers are only connecting to one DC, and the IP pool in the other DC seems to be emptying but not refilling. The DC that is being updated directly usually has about 61,000 IPs left; the pool in the other DC keeps going lower and lower.
I did see that some do go back in.
I have high traffic.
Here is my conf:
dyn_o_mite:
  datacenter: dc2
  rack: rack1
  dyn_listen: 0.0.0.0:8101
  dyn_seeds:
  listen: 0.0.0.0:6379
  servers:
  tokens: '12345678'
  secure_server_option: datacenter
  pem_key_file: conf/dynomite.pem
  data_store: 0
  stats_listen: 0.0.0.0:22222
  mbuf_size: 512000
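For context, here is a minimal sketch of the take/return pattern described above, using redis-py against the local Dynomite proxy. The host, the port (taken from the `listen` line in the conf), and the key name `ip_pool` are illustrative assumptions, not details from the issue:

```python
import redis

# Hypothetical client pointed at the local Dynomite proxy
# (the conf above has it listening on 0.0.0.0:6379).
r = redis.Redis(host="127.0.0.1", port=6379, decode_responses=True)

def take_ip():
    # SPOP removes and returns one *random* member of the set.
    return r.spop("ip_pool")

def return_ip(ip):
    # SADD puts the IP back into the pool when the data session ends.
    r.sadd("ip_pool", ip)

# Typical session lifecycle:
ip = take_ip()
# ... use the IP for the data session ...
return_ip(ip)
```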
I found the problem.
I was using SPOP to get an IP, and the way Dynomite works is that it just replays the SPOP on all the other nodes. Since SPOP removes a random member, each node pops a different IP: when I wanted one IP it actually took out two, but only one was returned when the session was over, because only one was used and recorded.
Is this behaviour intentional and I'm just not using SPOP with a cluster correctly, or is this a bug?
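Assuming the diagnosis above is right (Dynomite forwards the same SPOP to the other DC, and SPOP picks an independent random member on each node), one possible workaround is to choose the member on the client and remove it by value, so the command that gets replicated removes the same IP everywhere. This is only a sketch, and it assumes SRANDMEMBER and SREM are usable through Dynomite; the key name `ip_pool` is again hypothetical:

```python
import redis

r = redis.Redis(host="127.0.0.1", port=6379, decode_responses=True)

def take_ip():
    # Pick a candidate without mutating the set, then remove it by value.
    # Unlike SPOP, SREM names the exact member, so the command replicated
    # to the other DC removes the same IP rather than a random one.
    while True:
        ip = r.srandmember("ip_pool")
        if ip is None:
            return None                 # pool is empty
        if r.srem("ip_pool", ip) == 1:  # 0 means another client raced us; retry
            return ip

def return_ip(ip):
    r.sadd("ip_pool", ip)
```

This is not atomic across clients, hence the retry loop when two clients race on the same IP, but each SREM is deterministic, which should keep the pools in the two DCs from drifting apart.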
I always have 64 client connections to the Dynomite node that is being interacted with directly.