This project is derived from chatgpt-web, an excellent ChatGPT web client. chatgpt-web-next is built with Next.js and TailwindCSS, and can be deployed for free on railway.app. You can try it out at chat.helianthuswhite.cn. Nothing is stored on the server: all of your conversations live in your browser's localStorage, so clearing your browser's cache clears them as well.
This is a standard Next.js project, so you can install dependencies with:
npm install --legacy-peer-deps
or with cnpm:
cnpm install
After installing, you need to configure the environment variables. Create a file named `.env.local` (it is already ignored via the `.gitignore` file). The following variables can be set in `.env.local`:
# OpenAI API Key - https://platform.openai.com/overview
OPENAI_API_KEY=
# change this to an `accessToken` extracted from the ChatGPT site's `https://chat.openai.com/api/auth/session` response
OPENAI_ACCESS_TOKEN=
# OpenAI API Base URL - https://api.openai.com
OPENAI_API_BASE_URL=
# OpenAI API Model - https://platform.openai.com/docs/models
OPENAI_API_MODEL=
# Reverse Proxy
API_REVERSE_PROXY=
# timeout
TIMEOUT_MS=100000
# Socks Proxy Host
SOCKS_PROXY_HOST=
# Socks Proxy Port
SOCKS_PROXY_PORT=
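As a minimal example, assuming you only use the official OpenAI API, you could create `.env.local` from the shell like this (the `sk-xxxx` key is a placeholder; substitute your own):

```shell
# Write a minimal .env.local in the project root.
# Only OPENAI_API_KEY and a timeout are set here; add the other
# variables from the list above as needed.
cat > .env.local <<'EOF'
OPENAI_API_KEY=sk-xxxx
TIMEOUT_MS=100000
EOF
```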
Once the variables are set correctly, you can start the project with the dev command:
npm run dev
As with any normal Node service, you can build it on your deploy server with:
npm run build
Next.js will run the build process and emit all output into the `.next` folder. After the build, use the start command to launch the server:
npm run start
You can also run it under a daemon manager such as pm2:
pm2 start npm -- run start
A Docker-based deployment is also provided. Run the following command in the project directory (assuming Docker is installed and the daemon is running):
docker build -t chatgpt-web-next .
You can check the Dockerfile for more details about the build process. Once the image has been built successfully, run it like any other Docker service:
docker run --name chatgpt -d -p 3000:3000 --env OPENAI_API_KEY=sk-xxxx --env SOCKS_PROXY_HOST=127.0.0.1 --env SOCKS_PROXY_PORT=7890 chatgpt-web-next
Note that the environment variables passed to Docker must be set correctly, and that the image name (`chatgpt-web-next` above) must match the one you built.
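Equivalently, a `docker-compose.yml` along these lines could manage the same container (the file below is a sketch of my own; the service name and values are illustrative, not part of the project):

```yaml
services:
  chatgpt:
    image: chatgpt-web-next
    ports:
      - "3000:3000"
    environment:
      - OPENAI_API_KEY=sk-xxxx
      - SOCKS_PROXY_HOST=127.0.0.1
      - SOCKS_PROXY_PORT=7890
    restart: unless-stopped
```

Then start it with `docker compose up -d`.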
Deploying with a cloud service is recommended; railway.app, Vercel, Zeabur, etc. are all good choices. Pick the one you like and see its official docs for deployment instructions.
| Environment Variable | Required | Description |
| --- | --- | --- |
| `TIMEOUT_MS` | Optional | Timeout in milliseconds |
| `OPENAI_API_KEY` | Optional | Required for the OpenAI API. The `apiKey` can be obtained from here. |
| `OPENAI_ACCESS_TOKEN` | Optional | Required for the Web API. The `accessToken` can be obtained from here. |
| `OPENAI_API_BASE_URL` | Optional, only for the OpenAI API | API endpoint |
| `OPENAI_API_MODEL` | Optional, only for the OpenAI API | API model |
| `API_REVERSE_PROXY` | Optional, only for the Web API | Reverse proxy address for the Web API. Details |
| `SOCKS_PROXY_HOST` | Optional, effective with `SOCKS_PROXY_PORT` | Socks proxy host |
| `SOCKS_PROXY_PORT` | Optional, effective with `SOCKS_PROXY_HOST` | Socks proxy port |
Note: Changing environment variables in Railway will cause re-deployment.
MIT © helianthuswhite