Release/0.4.13 to main #2916

Merged: 27 commits into main from release/0.4.13 on May 16, 2024

Changes shown from 1 commit. Commits:
012cc80
Merge pull request #2815 from janhq/main
Van-QA Apr 25, 2024
0bad1a4
fix: remove scroll animation chat screen (#2819)
namchuai Apr 25, 2024
ce2d8e5
chore: remove nutjs (#2860)
namchuai May 2, 2024
c6182ab
Customize scroll-bar style (#2857)
QuentinMacheda May 3, 2024
2016eae
Remove hidden overflow property of tailwind Update buttons position…
QuentinMacheda May 3, 2024
c21bc08
Fix eslint issue in EditChatInput (#2864)
QuentinMacheda May 3, 2024
092a572
Feat: Remote API Parameters Correction (#2802)
hahuyhoang411 May 4, 2024
4c88d03
feat: add remote model command-r (#2868)
henryh0x1 May 6, 2024
86fda1c
feat: add model gpt-4 turbo (#2836)
henryh0x1 May 6, 2024
a6ccd67
fix: validate max_token from context_length value (#2870)
urmauur May 6, 2024
1e3e5a8
feat/implement-inference-martian-extension (#2869)
henryh0x1 May 6, 2024
9effb6a
fix: validate context length (#2871)
urmauur May 6, 2024
d226640
Add OpenRouter (#2826)
Inchoker May 6, 2024
2008aae
Feat: Correct context length for models (#2867)
hahuyhoang411 May 6, 2024
0406b51
fix: stop auto scroll if user manually scrolling up (#2874)
namchuai May 6, 2024
efbc96d
feat: inference anthropic extension (#2885)
henryh0x1 May 11, 2024
6af4a2d
feat: add deeplink support (#2883)
namchuai May 13, 2024
1e0d4f3
Feat: Adjust model hub v0.4.13 (#2879)
hahuyhoang411 May 13, 2024
08d15e5
fix: deeplink when app not open on linux (#2893)
namchuai May 13, 2024
eb7e963
add: gpt4o (#2899)
hahuyhoang411 May 14, 2024
aa1f01f
Revert "chore: remove nutjs" and replace nutjs version (#2900)
Van-QA May 15, 2024
1130979
fix: cohere stream param does not work (#2907)
louis-jan May 15, 2024
33697be
Change mac arm64 build use github runner (#2910)
hiento09 May 16, 2024
06be308
Revert "Change mac arm64 build use github runner (#2910)" (#2911)
hiento09 May 16, 2024
0436224
Revert "Revert "Change mac arm64 build use github runner (#2910)" (#2…
hiento09 May 16, 2024
2182599
Chore: Add phi3 (#2914)
hahuyhoang411 May 16, 2024
537ef20
chore: replace nitro by cortex-cpp (#2912)
louis-jan May 16, 2024
Feat: Remote API Parameters Correction (#2802)
* fix: change to gpt4 turbo

* add: params

* fix: change to gpt 3.5 turbo

* delete: redundant

* fix: correct description

* version bump

* add: params

* fix: version bump

* delete: deprecated

* add: params

* add: new model

* chore: version bump

* fix: version correct

* add: params

* fix: version bump

* fix: llama2 no longer supported

* fix: reverse mistral api

* fix: add params

* fix: mistral api redundant params

* fix: typo

* fix: typo

* fix: correct context length

* fix: remove stop

---------

Co-authored-by: Van Pham <64197333 [email protected]>
hahuyhoang411 and Van-QA committed May 4, 2024
commit 092a57268453f36a832e893e09a76ad9fecd2eb6
2 changes: 1 addition & 1 deletion extensions/inference-groq-extension/package.json
@@ -1,7 +1,7 @@
{
"name": "@janhq/inference-groq-extension",
"productName": "Groq Inference Engine",
"version": "1.0.0",
"version": "1.0.1",
"description": "This extension enables fast Groq chat completion API calls",
"main": "dist/index.js",
"module": "dist/module.js",
88 changes: 30 additions & 58 deletions extensions/inference-groq-extension/resources/models.json
@@ -8,18 +8,18 @@
"id": "llama3-70b-8192",
"object": "model",
"name": "Groq Llama 3 70b",
"version": "1.0",
"version": "1.1",
"description": "Groq Llama 3 70b with supercharged speed!",
"format": "api",
"settings": {
"text_model": false
},
"settings": {},
"parameters": {
"max_tokens": 8192,
"temperature": 0.7,
"top_p": 1,
"stop": null,
"stream": true
"top_p": 0.95,
"stream": true,
"stop": [],
"frequency_penalty": 0,
"presence_penalty": 0
},
"metadata": {
"author": "Meta",
@@ -36,18 +36,18 @@
"id": "llama3-8b-8192",
"object": "model",
"name": "Groq Llama 3 8b",
"version": "1.0",
"version": "1.1",
"description": "Groq Llama 3 8b with supercharged speed!",
"format": "api",
"settings": {
"text_model": false
},
"settings": {},
"parameters": {
"max_tokens": 8192,
"temperature": 0.7,
"top_p": 1,
"stop": null,
"stream": true
"top_p": 0.95,
"stream": true,
"stop": [],
"frequency_penalty": 0,
"presence_penalty": 0
},
"metadata": {
"author": "Meta",
@@ -64,53 +64,25 @@
"id": "gemma-7b-it",
"object": "model",
"name": "Groq Gemma 7b Instruct",
"version": "1.0",
"version": "1.1",
"description": "Groq Gemma 7b Instruct with supercharged speed!",
"format": "api",
"settings": {
"text_model": false
},
"settings": {},
"parameters": {
"max_tokens": 4096,
"max_tokens": 8192,
"temperature": 0.7,
"top_p": 1,
"stop": null,
"stream": true
"top_p": 0.95,
"stream": true,
"stop": [],
"frequency_penalty": 0,
"presence_penalty": 0
},
"metadata": {
"author": "Google",
"tags": ["General"]
},
"engine": "groq"
},
{
"sources": [
{
"url": "https://groq.com"
}
],
"id": "llama2-70b-4096",
"object": "model",
"name": "Groq Llama 2 70b",
"version": "1.0",
"description": "Groq Llama 2 70b with supercharged speed!",
"format": "api",
"settings": {
"text_model": false
},
"parameters": {
"max_tokens": 4096,
"temperature": 0.7,
"top_p": 1,
"stop": null,
"stream": true
},
"metadata": {
"author": "Meta",
"tags": ["General", "Big Context Length"]
},
"engine": "groq"
},
{
"sources": [
{
@@ -120,18 +92,18 @@
"id": "mixtral-8x7b-32768",
"object": "model",
"name": "Groq Mixtral 8x7b Instruct",
"version": "1.0",
"version": "1.1",
"description": "Groq Mixtral 8x7b Instruct is Mixtral with supercharged speed!",
"format": "api",
"settings": {
"text_model": false
},
"settings": {},
"parameters": {
"max_tokens": 4096,
"max_tokens": 32768,
"temperature": 0.7,
"top_p": 1,
"stop": null,
"stream": true
"top_p": 0.95,
"stream": true,
"stop": [],
"frequency_penalty": 0,
"presence_penalty": 0
},
"metadata": {
"author": "Mistral",
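The Groq diff above replaces `"stop": null` with an empty array and adds `frequency_penalty` / `presence_penalty` alongside the retuned `top_p`. The sketch below shows how a client might merge such a per-model `parameters` block into an OpenAI-compatible chat completion payload; the type and function names are illustrative assumptions for this sketch, not Jan's actual extension API.

```typescript
// Sketch: merging a model's stored "parameters" block into an
// OpenAI-compatible chat completion request body, as a Groq-style
// inference extension might do. Names are illustrative, not Jan's API.
type ModelParameters = {
  max_tokens: number;
  temperature: number;
  top_p: number;
  stream: boolean;
  stop: string[];
  frequency_penalty: number;
  presence_penalty: number;
};

type ChatMessage = { role: string; content: string };

function buildPayload(
  modelId: string,
  params: ModelParameters,
  messages: ChatMessage[]
) {
  // Spreading the stored parameters keeps the payload in sync with
  // resources/models.json. An empty `stop` array is safe to send,
  // whereas `stop: null` can be rejected by some compatible servers.
  return { model: modelId, messages, ...params };
}

const payload = buildPayload(
  "llama3-70b-8192",
  {
    max_tokens: 8192,
    temperature: 0.7,
    top_p: 0.95,
    stream: true,
    stop: [],
    frequency_penalty: 0,
    presence_penalty: 0,
  },
  [{ role: "user", content: "Hello" }]
);
```

A payload built this way mirrors the post-diff defaults exactly, so updating a model's JSON entry is enough to change what the extension sends.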
2 changes: 1 addition & 1 deletion extensions/inference-mistral-extension/package.json
@@ -1,7 +1,7 @@
{
"name": "@janhq/inference-mistral-extension",
"productName": "MistralAI Inference Engine",
"version": "1.0.0",
"version": "1.0.1",
"description": "This extension enables Mistral chat completion API calls",
"main": "dist/index.js",
"module": "dist/module.js",
52 changes: 25 additions & 27 deletions extensions/inference-mistral-extension/resources/models.json
@@ -8,20 +8,20 @@
"id": "mistral-small-latest",
"object": "model",
"name": "Mistral Small",
"version": "1.0",
"description": "Mistral Small is the ideal choice for simpe tasks that one can do in builk - like Classification, Customer Support, or Text Generation. It offers excellent performance at an affordable price point.",
"version": "1.1",
"description": "Mistral Small is the ideal choice for simple tasks (Classification, Customer Support, or Text Generation) at an affordable price.",
"format": "api",
"settings": {},
"parameters": {
"max_tokens": 4096,
"temperature": 0.7
"max_tokens": 32000,
"temperature": 0.7,
"top_p": 0.95,
"stream": true
},
"metadata": {
"author": "Mistral",
"tags": [
"Classification",
"Customer Support",
"Text Generation"
"General"
]
},
"engine": "mistral"
@@ -32,24 +32,23 @@
"url": "https://docs.mistral.ai/api/"
}
],
"id": "mistral-medium-latest",
"id": "mistral-large-latest",
"object": "model",
"name": "Mistral Medium",
"version": "1.0",
"description": "Mistral Medium is the ideal for intermediate tasks that require moderate reasoning - like Data extraction, Summarizing a Document, Writing a Job Description, or Writing Product Descriptions. Mistral Medium strikes a balance between performance and capability, making it suitable for a wide range of tasks that only require language transformaion",
"name": "Mistral Large",
"version": "1.1",
"description": "Mistral Large is ideal for complex tasks (Synthetic Text Generation, Code Generation, RAG, or Agents).",
"format": "api",
"settings": {},
"parameters": {
"max_tokens": 4096,
"temperature": 0.7
"max_tokens": 32000,
"temperature": 0.7,
"top_p": 0.95,
"stream": true
},
"metadata": {
"author": "Mistral",
"tags": [
"Data extraction",
"Summarizing a Document",
"Writing a Job Description",
"Writing Product Descriptions"
"General"
]
},
"engine": "mistral"
@@ -60,24 +59,23 @@
"url": "https://docs.mistral.ai/api/"
}
],
"id": "mistral-large-latest",
"id": "open-mixtral-8x22b",
"object": "model",
"name": "Mistral Large",
"version": "1.0",
"description": "Mistral Large is ideal for complex tasks that require large reasoning capabilities or are highly specialized - like Synthetic Text Generation, Code Generation, RAG, or Agents.",
"name": "Mixtral 8x22B",
"version": "1.1",
"description": "Mixtral 8x22B is a high-performance, cost-effective model designed for complex tasks.",
"format": "api",
"settings": {},
"parameters": {
"max_tokens": 4096,
"temperature": 0.7
"max_tokens": 32000,
"temperature": 0.7,
"top_p": 0.95,
"stream": true
},
"metadata": {
"author": "Mistral",
"tags": [
"Text Generation",
"Code Generation",
"RAG",
"Agents"
"General"
]
},
"engine": "mistral"
2 changes: 1 addition & 1 deletion extensions/inference-openai-extension/package.json
@@ -1,7 +1,7 @@
{
"name": "@janhq/inference-openai-extension",
"productName": "OpenAI Inference Engine",
"version": "1.0.0",
"version": "1.0.1",
"description": "This extension enables OpenAI chat completion API calls",
"main": "dist/index.js",
"module": "dist/module.js",
59 changes: 24 additions & 35 deletions extensions/inference-openai-extension/resources/models.json
@@ -5,20 +5,25 @@
"url": "https://openai.com"
}
],
"id": "gpt-4",
"id": "gpt-4-turbo",
"object": "model",
"name": "OpenAI GPT 4",
"version": "1.0",
"version": "1.1",
"description": "OpenAI GPT 4 model is extremely good",
"format": "api",
"settings": {},
"parameters": {
"max_tokens": 4096,
"temperature": 0.7
"temperature": 0.7,
"top_p": 0.95,
"stream": true,
"stop": [],
"frequency_penalty": 0,
"presence_penalty": 0
},
"metadata": {
"author": "OpenAI",
"tags": ["General", "Big Context Length"]
"tags": ["General"]
},
"engine": "openai"
},
@@ -31,43 +36,22 @@
"id": "gpt-4-vision-preview",
"object": "model",
"name": "OpenAI GPT 4 with Vision (Preview)",
"version": "1.0",
"description": "OpenAI GPT 4 with Vision model is extremely good in preview",
"version": "1.1",
"description": "OpenAI GPT-4 Vision model features vision understanding capabilities",
"format": "api",
"settings": {
"vision_model": true,
"textModel": false
},
"parameters": {
"max_tokens": 4096,
"temperature": 0.7
"temperature": 0.7,
"top_p": 0.95,
"stream": true
},
"metadata": {
"author": "OpenAI",
"tags": ["General", "Big Context Length", "Vision"]
},
"engine": "openai"
},
{
"sources": [
{
"url": "https://openai.com"
}
],
"id": "gpt-3.5-turbo-16k-0613",
"object": "model",
"name": "OpenAI GPT 3.5 Turbo 16k 0613",
"version": "1.0",
"description": "OpenAI GPT 3.5 Turbo 16k 0613 model is extremely good",
"format": "api",
"settings": {},
"parameters": {
"max_tokens": 4096,
"temperature": 0.7
},
"metadata": {
"author": "OpenAI",
"tags": ["General", "Big Context Length"]
"tags": ["General", "Vision"]
},
"engine": "openai"
},
@@ -80,17 +64,22 @@
"id": "gpt-3.5-turbo",
"object": "model",
"name": "OpenAI GPT 3.5 Turbo",
"version": "1.0",
"description": "OpenAI GPT 3.5 Turbo model is extremely good",
"version": "1.1",
"description": "OpenAI GPT 3.5 Turbo model is extremely fast",
"format": "api",
"settings": {},
"parameters": {
"max_tokens": 4096,
"temperature": 0.7
"temperature": 0.7,
"top_p": 0.95,
"stream": true,
"stop": [],
"frequency_penalty": 0,
"presence_penalty": 0
},
"metadata": {
"author": "OpenAI",
"tags": ["General", "Big Context Length"]
"tags": ["General"]
},
"engine": "openai"
}
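Alongside this commit, related commits in the release (#2870, #2871) validate `max_tokens` against the model's context length, and the diffs here raise each `max_tokens` from a fixed 4096 to the model's actual window. A minimal sketch of that kind of guard, assuming a hypothetical helper name:

```typescript
// Clamp a requested max_tokens to the model's context window, in the
// spirit of "fix: validate max_token from context_length value" (#2870).
// clampMaxTokens is a hypothetical name, not Jan's actual function.
function clampMaxTokens(requested: number, contextLength: number): number {
  // A completion can never be allotted more tokens than the context
  // window, and at least one token must remain requestable.
  return Math.min(Math.max(requested, 1), contextLength);
}

// e.g. a 32768-token request against a 4096-token window is reduced:
const clamped = clampMaxTokens(32768, 4096);
```

Sending an uncapped `max_tokens` can cause OpenAI-compatible servers to reject the request outright, which appears to be the motivation for tying the stored value to each model's real context length instead of a shared 4096 default.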