Stars
Unified Efficient Fine-Tuning of 100 LLMs (ACL 2024)
An Efficient "Factory" to Build Multiple LoRA Adapters
Llama Chinese community: online Llama3 demo and fine-tuned models now available, with the latest Llama3 learning resources aggregated in real time; all code has been updated for Llama3. Building the best Chinese Llama LLM, fully open source and commercially usable.
S-LoRA: Serving Thousands of Concurrent LoRA Adapters
A plug-and-play library for parameter-efficient-tuning (Delta Tuning)
Awesome Pretrained Chinese NLP Models: a high-quality collection of Chinese pretrained models, large models, multimodal models, and large language models
The official repo of Qwen (通义千问) chat & pretrained large language model proposed by Alibaba Cloud.
A curated collection of open-source Chinese large language models, focusing on smaller models that can be privately deployed at low training cost, covering base models, domain-specific fine-tunes and applications, datasets, and tutorials.
Awesome-LLM: a curated list of Large Language Models
Fast Inference Solutions for BLOOM
ChatGLM2-6B: An Open-Source Bilingual Chat LLM
Chinese text classification with BERT and ERNIE
BELLE: Be Everyone's Large Language model Engine (an open-source Chinese dialogue LLM)
Code and documentation to train Stanford's Alpaca models, and generate the data.
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
A Gradio web UI for Large Language Models.
Fine-tuning ChatGLM-6B, ChatGLM2-6B, and ChatGLM3-6B for specific downstream tasks, covering Freeze, LoRA, P-Tuning, and full-parameter fine-tuning
Chinese LLaMA & Alpaca LLMs: local CPU/GPU training and deployment
Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"
An implementation of model parallel autoregressive transformers on GPUs, based on the Megatron and DeepSpeed libraries
[ECCV 2024] Official implementation of the paper "Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection"
Official repo for consistency models.
Alibaba Java Diagnostic Tool Arthas
Stable Diffusion web UI
Making large AI models cheaper, faster and more accessible
High-Resolution Image Synthesis with Latent Diffusion Models