[BUG] Redis clients add 1GB/s memory usage through a CLOSE_WAIT server connection #13496

Open
jaques-sam opened this issue Aug 27, 2024 · 16 comments
Labels
state:to-be-closed requesting the core team to close the issue

Comments

@jaques-sam

Describe the bug

I see an extra 1 GB of memory usage per second (!) when trying to connect with a Redis client to a redis-server that was started in a Docker container which is no longer running.

To reproduce

  1. Start a redis-server (e.g. redis-server &) from a Docker container which has the default Redis port forwarded or has access via the docker run argument --host
  2. Stop the Docker container
  3. Check whether the connection is in the CLOSE_WAIT state:

lsof -i :6379
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
code 530802 sam 39u IPv4 45625157 0t0 TCP localhost:redis (LISTEN)
code 530802 sam 44u IPv4 47196288 0t0 TCP localhost:redis->localhost:56540 (CLOSE_WAIT)

Expected behavior

CLI clients should definitely not require an unbounded amount of memory.
Furthermore, I expect a client to fail or time out if it cannot connect to a server on the Redis port.
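
A minimal sketch of that fail-fast behaviour on the client side, assuming the redis-rs crate (the Rust client mentioned in the comments); the address and timeout values are illustrative, not taken from this report:

// Sketch only: connect and read timeouts so a dead or half-closed peer
// produces an error instead of an indefinite hang.
use std::time::Duration;

fn ping_with_timeouts() -> redis::RedisResult<String> {
    let client = redis::Client::open("redis://127.0.0.1:6379/")?;
    // Give up on the TCP connect after 2 seconds.
    let mut con = client.get_connection_with_timeout(Duration::from_secs(2))?;
    // Give up on any blocking read after 2 seconds (e.g. a CLOSE_WAIT peer).
    con.set_read_timeout(Some(Duration::from_secs(2)))?;
    // PING either returns "PONG" or errors out within the timeout.
    redis::cmd("PING").query::<String>(&mut con)
}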

Additional information

I've used the following clients:

@sundb
Collaborator

sundb commented Aug 27, 2024

Do you mean the Rust client consumes more than 1 GB even after it is disconnected?

@jaques-sam
Author

Yes! It tries to connect to Redis over the port that is still in the CLOSE_WAIT state.

@sundb
Collaborator

sundb commented Aug 28, 2024

The reason should be that the Rust client does not realize that the Redis in Docker has been shut down, causing the connection to stay in the CLOSE_WAIT state.
You can use PING in a timer to periodically check whether the server has been shut down.
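
A rough sketch of such a periodic check, assuming the redis-rs crate; the interval, address and timeout values are illustrative:

// Sketch only: ping the server on a timer and treat a failed PING as "server down".
use std::{thread, time::Duration};

fn watch_server(url: &str) {
    let client = redis::Client::open(url).expect("invalid Redis URL");
    loop {
        let alive = client
            .get_connection_with_timeout(Duration::from_secs(2))
            .and_then(|mut con| {
                // Bound the read so a half-closed (CLOSE_WAIT) peer cannot block forever.
                con.set_read_timeout(Some(Duration::from_secs(2)))?;
                redis::cmd("PING").query::<String>(&mut con)
            })
            .is_ok();
        if !alive {
            eprintln!("Redis did not answer PING; treating the server as down");
            // e.g. drop the cached connection here and reconnect later
        }
        thread::sleep(Duration::from_secs(5));
    }
}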

@sundb
Collaborator

sundb commented Sep 5, 2024

@jaques-sam any news? If not, I'll mark it to be closed.

@sundb sundb added the state:to-be-closed requesting the core team to close the issue label Sep 5, 2024
@jaques-sam
Author

Uh, the remaining issue is IMHO a major bug and needs solving... How can an application reserve such a tremendous amount of memory (1 GB/s)?! This should be tackled. Closing this will simply hide the issue.

@sundb
Collaborator

sundb commented Sep 5, 2024

@jaques-sam regarding this, you can open an issue for help at https://github.com/redis-rs/redis-rs.

@jaques-sam
Author

Mmm, it's also happening with redis-cli, and probably with other redis clients as well.

@sundb
Collaborator

sundb commented Sep 5, 2024

@jaques-sam can you give the reproduction steps using redis-cli?

@jaques-sam
Author

jaques-sam commented Sep 5, 2024

The reproduction steps are as said in the first message:

  1. Add the forwarded port 6379 from a dev container in VSCode
  2. Check whether the connection is in the CLOSE_WAIT state:
lsof -i :6379
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
code 530802 sam 39u IPv4 45625157 0t0 TCP localhost:redis (LISTEN)
code 530802 sam 44u IPv4 47196288 0t0 TCP localhost:redis->localhost:56540 (CLOSE_WAIT)
  3. Just enter the command redis-cli
  4. Check with (h)top that total memory consumption increases by ~1 GB/s

@sundb
Collaborator

sundb commented Sep 5, 2024

@jaques-sam where do you see memory growing at 1 GB per second? From 'ps'? Please give the info.

@jaques-sam
Author

jaques-sam commented Sep 5, 2024

I couldn't reproduce it myself anymore, so I started trying it out again...
I remember I had to remove the port forwarding address from VSCode in a dev container to fix it:

[screenshot]

This is the reason why the port is in the CLOSE_WAIT state. This is even the case when the Docker container is still running. Sorry for the confusion.

It's strange that I don't see MEM% being high; only my main memory is getting full:

[screenshot]

After a couple of seconds:

[screenshot]

As you can see, redis-cli gives no output and seems blocked; memory increases, but it's not clear where...
It's definitely redis-cli, as that's the command that is running; shutting it down actually proves it, since all those GBs of memory are then released.

@sundb
Collaborator

sundb commented Sep 5, 2024

@jaques-sam please try gdb -batch -ex "bt" -p <pid> to see what redis-cli is doing now,
and try ps aux | grep redis-cli to see the memory usage of redis-cli.

@jaques-sam
Author

jaques-sam commented Sep 5, 2024

ps aux | rg redis
sam       804823  0.0  0.0  20616  4148 pts/2    S    11:47   0:00 redis-cli

gdb -batch -ex "bt" -p 804823

This GDB supports auto-downloading debuginfo from the following URLs:
  <https://debuginfod.fedoraproject.org/>
Enable debuginfod for this session? (y or [n]) [answered N; input not from terminal]
Debuginfod has been disabled.
To make this setting permanent, add 'set debuginfod enabled off' to .gdbinit.
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
0x00007f8dbd72c5dd in recv () from /lib64/libc.so.6
#0  0x00007f8dbd72c5dd in recv () from /lib64/libc.so.6
#1  0x0000556a235c4b6d in redisNetRead ()
#2  0x0000556a235cbd1c in redisBufferRead ()
#3  0x0000556a235ccc21 in redisGetReply ()
#4  0x0000556a235ccde4 in redisCommand ()
#5  0x0000556a235a8e90 in cliInitHelp.lto_priv.0 ()
#6  0x0000556a235ae433 in repl.lto_priv ()
#7  0x0000556a2359de79 in main ()
[Inferior 1 (process 804823) detached]

Memory is not increasing here:

ps aux | rg redis
sam       804823  0.0  0.0  20616  4252 pts/2    S    11:50   0:00 redis-cli
ps aux | rg redis
sam       804823  0.0  0.0  20616  4252 pts/2    S    11:50   0:00 redis-cli
ps aux | rg redis
sam       804823  0.0  0.0  20616  4252 pts/2    S    11:51   0:00 redis-cli
ps aux | rg redis
sam       804823  0.0  0.0  20616  4252 pts/2    S    11:51   0:00 redis-cli
ps aux | rg redis
sam       804823  0.0  0.0  20616  4252 pts/2    S    11:51   0:00 redis-cli

@sundb
Collaborator

sundb commented Sep 6, 2024

From your output we can see that redis-cli consumes just a little memory.
It doesn't get stuck, but rather it can't receive a reply (I don't know why it doesn't time out; maybe a bug).
Did you forget to turn off the forwarded port in VSCode? I suspect that may cause the problem.

@jaques-sam
Author

As said:

  • removing the forwarded port in VSCode fixes the problem; it releases the GBs from my main memory
  • quitting redis-cli also releases the GBs from my main memory

Since redis-cli is not increasing in memory in (h)top/ps, isn't that because the memory is consumed in kernel space?

@sundb
Collaborator

sundb commented Sep 6, 2024

As said:

  • removing the forwarded port in VSCode fixes the problem; it releases the GBs from my main memory
  • quitting redis-cli also releases the GBs from my main memory

Since redis-cli is not increasing in memory in (h)top/ps, isn't that because the memory is consumed in kernel space?

@jaques-sam no, I guess it's caused by VSCode.
