
This is super cool! #11

Closed
turnipsforme opened this issue Jun 7, 2023 · 9 comments

Comments

@turnipsforme

Hi! LOOOVE the idea behind this (for privacy reasons, but also because running a model locally is just super cool). I can't get past downloading a model and sending a message; I'm not getting anything back. This is on an M1 MacBook Air. Thanks!

@louisgv
Owner

louisgv commented Jun 8, 2023

@turnipsforme which model did you download? There's a bug I found recently; I'll patch it soon to enable models with smaller context windows, like Dolly.

@turnipsforme
Author

I couldn't get any model to work other than MPT-7B, and that one mostly returned gibberish (symbols, Chinese, or just random sentences). This was actually pretty funny; at one point it said "Hello, ���� come to my world" in Chinese in response to me saying hello (in English, haha).

@louisgv
Owner

louisgv commented Jun 8, 2023

@turnipsforme how much RAM do you have? Can you try the Wizard model? If you can load MPT, those should work well, at least on my machine :-?...

@LLukas22
Collaborator

LLukas22 commented Jun 8, 2023

@turnipsforme The MPT models are very sensitive to their prompt formats, especially the Instruct and Chat versions. Furthermore, this project doesn't use the Hugging Face tokenizers yet, which results in some weird decoding errors.
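For context, a minimal sketch of the kind of prompt wrapper the MPT Instruct variant expects (the template text follows MosaicML's published Alpaca-style format for MPT-7B-Instruct; the helper name is hypothetical, not part of this project):

```python
# Sketch: MPT-7B-Instruct was fine-tuned on an Alpaca-style prompt template,
# so raw chat text sent without this wrapper tends to produce off-format
# or gibberish-looking output. Helper name is illustrative only.

def format_mpt_instruct(instruction: str) -> str:
    """Wrap a user message in the template MPT-7B-Instruct was trained on."""
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n### Response:\n"
    )

print(format_mpt_instruct("Summarize this document in one sentence."))
```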

@louisgv
Owner

louisgv commented Jun 8, 2023

Ooh right, I need to add the remote vocab soon; I need to embed it into the model look-up somehow :d.....

@louisgv
Owner

louisgv commented Jun 8, 2023

@turnipsforme Just shipped an update which should enable more models, such as Dolly and GPT-J; please try them out. I'd also test out the Wizard model if your machine can run it :)

@turnipsforme
Author

Yess, both GPT-J and Dolly work! I only have 8 gigs of RAM, maybe that's why? Wizard works too (it's the closest so far to ChatGPT in terms of smarts; everything else has been a little... off, haha). Regardless, Dolly responds suuuper quickly and the app feels much better now! Looking forward to seeing this continue to grow!

@turnipsforme
Author

I just wanted to also give some thoughts on uses. I would personally use this primarily to help with text transformation (shorter/longer/remove elements/rewrite as a checklist…) since sensitive work documents can't go into ChatGPT. If the app were better suited for this kind of work, it would be amazing (I think a lot of people's work is sensitive and has to stay local)! Either way, best of luck!! :)

@louisgv
Owner

louisgv commented Jun 11, 2023

@turnipsforme yeah, 8GB of RAM is too little to run a 7B model, and even a 3B model IMO. You need at least 12GB (16GB preferred) for a 3B model to run at an OK speed.
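As a rough back-of-the-envelope check (a general rule of thumb, not a number from this project): loading a model takes about params × bits-per-weight ÷ 8 bytes for the weights, plus runtime overhead for the KV cache, scratch buffers, and the rest of the app:

```python
# Illustrative rule of thumb, not from local.ai: estimated RAM to load a
# model is weight storage (params * bits / 8) plus a fixed overhead guess
# for KV cache, scratch buffers, and the application itself.

def estimated_ram_gb(params_billions: float,
                     bits_per_weight: int = 4,
                     overhead_gb: float = 1.5) -> float:
    weights_gb = params_billions * bits_per_weight / 8  # 1B params ~ 1 GB at 8 bits
    return weights_gb + overhead_gb

# A 4-bit 7B model needs ~3.5 GB for weights alone: tight but possible on 8 GB.
# An fp16 (16-bit) 7B model needs ~14 GB for weights, which 8 GB cannot hold.
print(round(estimated_ram_gb(7, bits_per_weight=4), 1))
print(round(estimated_ram_gb(7, bits_per_weight=16), 1))
```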

> i think a lot of people’s work is sensitive and has to stay local

That's one of the core purposes of local.ai IMO!

@louisgv louisgv mentioned this issue Jun 12, 2023
@louisgv louisgv changed the title from "this is super cool!! can't get it to work (I'm not very technically proficient)" to "This is super cool!" Jul 6, 2023
Repository owner locked and limited conversation to collaborators Jul 6, 2023
@louisgv louisgv converted this issue into discussion #72 Jul 6, 2023

This issue was moved to a discussion.

You can continue the conversation there.
