Bug description
Hello,
I was unable to pick the Superset 2.1.2 version, which we are using currently (an update to 2.1.3 is coming soon, as well as the switch to 3.x).
While browsing the PGSQL dashboard metrics we noticed high values for PGSQL "idle in transaction" connections, saturating almost all of the connections available to Superset. Further examination meant querying pg_stat_activity for possible hints. It returned ~200 rows referring to ssh_tunnels queries:
datid            | 29836
datname          | superset
pid              | 336391
usesysid         | 16686
usename          | superset_user
client_addr      | 172.20.37.48
client_port      | 53656
backend_start    | 2024-03-15 09:16:06.49816+00
xact_start       | 2024-03-15 09:19:59.044348+00
query_start      | 2024-03-15 09:19:59.06228+00
state_change     | 2024-03-15 09:19:59.062474+00
wait_event_type  | Client
wait_event       | ClientRead
state            | idle in transaction
query_id         | 5580636630257905055
backend_type     | client backend
query            | SELECT ssh_tunnels.uuid AS ssh_tunnels_uuid, ssh_tunnels.created_on AS ssh_tunnels_created_on, ssh_tunnels.changed_on AS ssh_tunnels_changed_on, ssh_tunnels.extra_json AS ssh_tunnels_extra_json, ssh_tunnels.id AS ssh_tunnels_id, ssh_tunnels.database_id AS ssh_tunnels_database_id, ssh_tunnels.server_address AS ssh_tunnels_server_address, ssh_tunnels.server_port AS ssh_tunnels_server_port, ssh_tunnels.username AS ssh_tunnels_username, ssh_tunnels.password AS ssh_tunnels_password, ssh_tunnels.private_key AS ssh_tunnels_private_key, ssh_tunnels.private_key_password AS ssh_tunnels_private_key_password, ssh_tunnels.created_by_fk AS ssh_tunnels_created_by_fk, ssh_tunnels.changed_by_fk AS ssh_tunnels_changed_by_fk FROM ssh_tunnels WHERE ssh_tunnels.database_id = 1
The suspected Superset service was the webserver, and indeed ~200 connections were coming from it.
So the question is: do we really need background ssh_tunnels queries running when SSH tunneling is not enabled in the Superset config (the SSH_TUNNELING feature flag is not set)?
Our config:
a) Superset SQLAlchemy settings are default,
b) 2x web servers with 8 gunicorn workers each,
c) 3 workers.
(A sketch of the relevant superset_config.py section follows below.)
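For reference, this is a minimal sketch of the relevant part of our superset_config.py, assuming the stock feature-flag mechanism; everything beyond the SSH_TUNNELING flag itself is illustrative:

```python
# superset_config.py -- minimal sketch of the settings relevant to this report.
# SSH_TUNNELING defaults to False in Superset 2.1.x; we do not override it,
# i.e. SSH tunneling is NOT enabled anywhere in our config.
FEATURE_FLAGS = {
    "SSH_TUNNELING": False,
}

# The SQLAlchemy settings for the metadata database are left at their defaults
# (no custom pool sizing or engine options are set).
```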
How to reproduce the bug
Do not enable the SSH_TUNNELING option in the Superset config.
Observe the PGSQL metrics showing the 'idle in transaction' state.
Connect to PGSQL and examine the 'idle in transaction' queries (a query sketch follows below).
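For the last step, a minimal sketch of what we ran (assuming psycopg2 and a placeholder DSN for the Superset metadata database; adjust host and credentials to your environment):

```python
# List the "idle in transaction" sessions that are still holding
# the ssh_tunnels SELECT in the Superset metadata database.
import psycopg2

# Placeholder DSN -- replace with your own metadata-database connection details.
DSN = "host=localhost dbname=superset user=superset_user password=..."

SQL = """
    SELECT pid, state, xact_start, query
    FROM pg_stat_activity
    WHERE state = 'idle in transaction'
      AND query LIKE '%ssh_tunnels%'
    ORDER BY xact_start;
"""

with psycopg2.connect(DSN) as conn:
    with conn.cursor() as cur:
        cur.execute(SQL)
        rows = cur.fetchall()

print(f"{len(rows)} 'idle in transaction' ssh_tunnels session(s)")
for pid, state, xact_start, query in rows:
    print(pid, xact_start, query[:80])
```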
Screenshots/recordings
Superset version
2.1.2
Python version
3.9
Node version
Not applicable
Browser
Not applicable
Additional context
No response
Checklist
I have searched Superset docs and Slack and didn't find a solution to my problem.
I have searched the GitHub issue tracker and didn't find a similar bug report.
I have checked Superset's logs for errors and if I found a relevant Python stacktrace, I included it here as text in the "additional context" section.