Trouble with File Upload in Self-Hosted Supabase Using Docker #16857
Comments
Bumping.
I have the same behaviour, running Supabase in Docker inside a nested unprivileged LXC container on Proxmox.
@ivasilov The problem still exists with studio:v0.23.09_amd64, with this error:
@ivasilov, the latest image tag is v0.23.09; same issue as @aabulmagd. Any progress?
@dfang Having the same issue: I've just updated the docker-compose with studio version v0.23.09 but am still seeing those console log errors.
Same thing.
Same thing; the progress gets stuck at 0%. Any progress, please?
As it turns out, I had a problem with http (not https) specified in the .env, since I'm using nginx in front of Kong with custom SSL certificates.
I managed to upload files with a python script. |
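The commenter above doesn't share the script, but a minimal sketch of what such a Python upload script could look like, using only the standard library and the Storage REST endpoint visible elsewhere in this thread (`/storage/v1/object/<bucket>/<path>` behind the gateway on port 8000). The host, bucket, and key are placeholders for your own setup, and sending the raw bytes with an explicit Content-Type is an assumption about how the Storage API accepts uploads:

```python
import mimetypes
import urllib.request

SUPABASE_URL = "http://192.168.1.202:8000"            # your gateway; not "localhost" when remote
SERVICE_KEY = "<service_role or anon JWT from .env>"  # placeholder

def upload_url(base_url: str, bucket: str, object_path: str) -> str:
    """Build the Storage upload endpoint seen in the browser console above."""
    return f"{base_url}/storage/v1/object/{bucket}/{object_path}"

def upload_file(local_path: str, bucket: str, object_path: str) -> int:
    """POST the file bytes to Storage and return the HTTP status code."""
    content_type = mimetypes.guess_type(local_path)[0] or "application/octet-stream"
    with open(local_path, "rb") as f:
        req = urllib.request.Request(
            upload_url(SUPABASE_URL, bucket, object_path),
            data=f.read(),
            headers={
                "Authorization": f"Bearer {SERVICE_KEY}",
                "apikey": SERVICE_KEY,
                "Content-Type": content_type,
            },
            method="POST",
        )
        with urllib.request.urlopen(req) as resp:
            return resp.status
```

If the Studio GUI posts to localhost but a script like this succeeds against the LAN IP, that points at the URL configuration rather than the Storage service itself.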
Hi all, is this still an issue? Just tested and it looks resolved to me.
Got the same problem here. Running via Docker too, and I changed every localhost to my local IP (192.168.1.202) in the .env file. When uploading via the bucket GUI in my browser, the POST goes to http://localhost:8000/storage/v1/object/files/ while all GET requests are resolved under http://192.168.1.202:8000/api/storage/default/buckets
Edit: today I got back to my office and tried to upload a file again, and now it routes correctly to my LAN IP: SITE_URL=http://192.168.1.202:3000. It's strange that it works today, because I haven't changed anything overnight. Maybe this is useful.
It worked for me. @supabase should modify the installation script to automatically detect and replace localhost with the host IP.
Getting the same issue here; I am able to upload via the SDK but not via the Studio GUI.
I have the same error.
Hi guys, is this still an issue? I've just pulled the latest version, and it works fine for me. If possible, could you post some images of the logs for
I have the same error when trying to upload. I can't pull the logs, as when I run the default query I get:
The DB seems to work fine; via Studio I can add tables and data. The setup is a brand-new pull on Hetzner Cloud. No env changes, as I understood from the docs that it should work fine as is (just testing here, so no security risk).
I've looked into this more closely, and this comment solves it. You need to set
If you're still seeing this issue, please share your
Still have the issue. My env:
It is a clean install on Hetzner that I don't use for anything right now, so if you want, you are welcome to investigate on the system itself.
Yeah, that would be great! Can you email me more info about your instance at [email protected]?
Had the exact same issue.
Any update on this? I have the same issue.
Same issue. I changed all localhost entries to the appropriate LAN IP in the .env file and restarted Docker. Still no uploads via Studio.
Ok, I got this working. Notice the https, as it's a public route through your proxy.
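For reference, a sketch of the .env values this thread keeps circling around, assuming a public domain behind a reverse proxy. The variable names follow the standard self-hosting docker/.env.example, and the domain is a placeholder:

```env
# Replace with your public, externally reachable origin (note the https):
SITE_URL=https://supabase.example.com
API_EXTERNAL_URL=https://supabase.example.com
SUPABASE_PUBLIC_URL=https://supabase.example.com
```

When these stay at http://localhost, Studio hands the browser upload URLs that point at the client's own machine, which matches the stuck-at-0% behaviour reported above.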
Just a quick intro to all these env vars:
Those are OK; not important when self-hosting.
@ivasilov It would be nice to be able to remove the subscriptions module for self-hosted deployments, to prevent all those errors.
Can't open http://localhost:3000 in the browser, though http://localhost:8000 works well. I'm also seeing a lot of 404s, and the AI SQL helper doesn't work: trying to use it gives a huge unclosable error popup (HTTP/1.1 500 Internal Server Error). curl query and screenshot here:
curl 'http://localhost:8000/api/ai/sql/edit' \
-H 'Accept: application/json' \
-H 'Accept-Language: ru-RU,ru;q=0.9,en-US;q=0.8,en;q=0.7,et;q=0.6' \
-H 'Authorization: Basic YWRtaW46YWRtaW4=' \
-H 'Connection: keep-alive' \
-H 'Content-Type: application/json' \
-H 'Cookie: _logflare_key=SFMyNTY.g3QAAAACbQAAAAtfY3NyZl90b2tlbm0AAAAYcWlXX2lhaFpudy1tOVMtR19GYnhpWmVEbQAAAAd1c2VyX2lkYQE.j8AUTkki4J8Fn9TAj7SLQkvMSUbyYEszcmwikTKltnY; __stripe_mid=120755d5-e809-4eb0-9064-c63216755c666c7008' \
-H 'Origin: http://localhost:8000' \
-H 'Referer: http://localhost:8000/project/default/sql/1' \
-H 'Sec-Fetch-Dest: empty' \
-H 'Sec-Fetch-Mode: cors' \
-H 'Sec-Fetch-Site: same-origin' \
-H 'User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/122.0.0.0 Safari/537.36' \
-H 'X-Request-Id: 7492f881-a6f6-4ecc-b373-f91b3668c586' \
-H 'sec-ch-ua: "Chromium";v="122", "Not(A:Brand";v="24", "Google Chrome";v="122"' \
-H 'sec-ch-ua-mobile: ?0' \
-H 'sec-ch-ua-platform: "Windows"' \
--data-raw $'{"prompt":"show current postgre version","sql":"select * from\\n (select version()) as version,\\n (select current_setting(\'server_version_num\')) as version_number;","entityDefinitions":["CREATE TABLE public.mydata (\\n id bigint GENERATED BY DEFAULT AS IDENTITY ,\\n text text NOT NULL,\\n CONSTRAINT mydata_pkey PRIMARY KEY (id)\\n) TABLESPACE pg_default;"]}'
I'll try to answer this question. I'm also using Nginx Proxy Manager. When we need to access the storage service, Nginx Proxy Manager proxies the request to our self-hosted server, and the configuration on that server says the request should be handled by localhost:8000. So the JavaScript console shows our request being forwarded to localhost:8000, and that localhost is interpreted as the machine our own client is on, not the localhost on the remote host. Therefore the upload fails. So, change the service address in .env to your domain name (https://my-extradomain.com/). I think this works and is also secure: Nginx Proxy Manager will redirect to this address when making a request to the storage service, the request is still SSL-encrypted, and it will not expose internal information of the server. How do others understand this process?
AFAIK the connection between the user and nginx will be secured (across the whole internet). The connection between nginx and the self-hosted Supabase will be unencrypted, because nginx does SSL termination. In practice this shouldn't matter, because nginx and Supabase are usually on the same server or in the same trusted virtual network.
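To make the SSL-termination setup described here concrete, a minimal nginx sketch, assuming nginx and Kong run on the same host with Kong listening on port 8000. The domain and certificate paths are placeholders:

```nginx
server {
    listen 443 ssl;
    server_name supabase.example.com;                   # placeholder domain

    ssl_certificate     /etc/ssl/certs/fullchain.pem;   # placeholder paths
    ssl_certificate_key /etc/ssl/private/privkey.pem;

    location / {
        # Plain HTTP to Kong: nginx terminates SSL, traffic stays on-host
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```

With this in place, SUPABASE_PUBLIC_URL and friends should point at https://supabase.example.com, not at localhost:8000.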
I can neither upload to buckets nor create users with authentication from the web UI.
Freshly installed self-hosted Supabase on a VPS behind NGINX (following https://blog.activeno.de/the-ultimate-supabase-self-hosting-guide). Same bug in Studio: I cannot add a user, create a folder in the bucket, or upload a file.
Bug report
Describe the bug
I'm currently facing a perplexing problem while attempting to manually upload files to a Supabase bucket on my self-hosted instance using Docker. When I initiate the upload, the progress gets stuck at 0% and the upload never commences. I've tried this on multiple computers and with various file types, but none of these attempts have been successful.
To Reproduce
Expected behavior
The manual file upload process should initiate normally, progressing from 0% to completion, allowing the files to be uploaded successfully to the Supabase bucket.
Screenshots
System information
Additional context