ms-swift: Use PEFT or Full-parameter to finetune 300 LLMs or 50 MLLMs. (Qwen2, GLM4v, Internlm2.5, Yi, Llama3.1, Llava-Video, Internvl2, MiniCPM-V, Deepseek, Baichuan2, Gemma2, Phi3-Vision, ...)
PTIT's Major Project: Website Programming - This repo contains a chatbot for a clothing store. The chatbot acts as an employee with specific knowledge about clothing consultation, website support, and store information.
Fine-tuning Llama 3 8B to generate JSON-formatted answers to arithmetic questions, then parsing that output to perform the calculations.
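The output-processing step described above can be sketched as follows. This is a minimal illustration, assuming a hypothetical JSON schema with `operation` and `operands` fields; the repo's actual schema is not specified here.

```python
import json
from operator import add, sub, mul, truediv

# Hypothetical schema (an assumption, not the repo's actual format):
# the fine-tuned model is expected to emit e.g.
# {"operation": "add", "operands": [2, 3]} for "What is 2 + 3?".
OPS = {"add": add, "subtract": sub, "multiply": mul, "divide": truediv}

def evaluate(model_output: str) -> float:
    """Parse the model's JSON answer and perform the calculation."""
    parsed = json.loads(model_output)
    op = OPS[parsed["operation"]]
    left, right = parsed["operands"]
    return op(left, right)

print(evaluate('{"operation": "multiply", "operands": [6, 7]}'))  # prints 42
```

Doing the arithmetic outside the model sidesteps LLMs' unreliability at computation: the model only has to produce well-formed structure.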
Fine-tunes the Mistral-Nemo-12b model for text generation using the Unsloth optimization framework, which accelerates training, achieving roughly 2x faster fine-tuning compared to conventional methods.
Fine-tuning GPT-3.5 and Llama3 LLMs for enhanced persona consistency in chatbots using Google's Synthetic Persona Chat dataset
Performs deduplication on the FLAN v2 dataset and fine-tunes Llama 3 on the deduplicated data.
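A minimal sketch of one possible deduplication step: exact-match dedup on normalized instruction text. The `inputs` field name and the normalization rule are assumptions for illustration, not the repo's actual pipeline.

```python
def deduplicate(examples):
    """Drop exact duplicates by normalized text, keeping the first occurrence."""
    seen = set()
    unique = []
    for ex in examples:
        # Normalize: lowercase and collapse whitespace before comparing.
        key = " ".join(ex["inputs"].lower().split())
        if key not in seen:
            seen.add(key)
            unique.append(ex)
    return unique

data = [
    {"inputs": "Translate to French: Hello"},
    {"inputs": "translate to  french: hello"},  # duplicate after normalization
    {"inputs": "Summarize this article."},
]
print(len(deduplicate(data)))  # prints 2
```

Near-duplicate detection (e.g. MinHash) would catch more redundancy, but exact matching after normalization is a cheap first pass.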
Llama3 Trainer aims to provide a CLI interface to orchestrate the fine-tuning of open source AI models, such as Llama3, using 3rd party services, such as [Lambda Cloud](https://lambdalabs.com/), [Hugging Face](https://huggingface.co/) and [Weights & Biases](https://wandb.ai).
Fine-tuning framework.
Work done during an AI internship at princelab.
Implements several deep learning models for machine-generated text detection, including detection of mixed human-machine text.
⚙️ Fine-Tune 🦙 Llama 3.1, Phi-3, and other models on a custom dataset using 🕴️ unsloth, and save the result to the Hugging Face Hub
Materials for CSE Summer School Hackathon 2024