University of California, San Diego - United States of America
Kaggle: https://www.kaggle.com/lizhecheng
Stars
One-click templates for running inference with language models
A high-throughput and memory-efficient inference and serving engine for LLMs
ChartMimic: Evaluating LMM’s Cross-Modal Reasoning Capability via Chart-to-Code Generation
This project is a **proof of concept** that aims to replicate the reasoning capabilities of OpenAI's newly released o1 model.
⏰ AI conference deadline countdowns
OLAPH: Improving Factuality in Biomedical Long-form Question Answering
Inference and training library for high-quality TTS models.
Repository for our paper "Frustratingly Easy Jailbreak of Large Language Models via Output Prefix Attacks". https://www.researchsquare.com/article/rs-4385503/latest
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
Frontend Diffusion is an end-to-end LLM-powered tool that generates high-quality websites from user sketches.
"Head-to-Tail: How Knowledgeable are Large Language Models (LLMs)? A.K.A. Will LLMs Replace Knowledge Graphs?" (NAACL 2024)
[EMNLP 2023] MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions
Repository for our paper "DeepEdit: Knowledge Editing as Decoding with Constraints". https://arxiv.org/abs/2401.10471
TextGrad: Automatic "Differentiation" via Text. Uses large language models to backpropagate textual gradients.
Video and code lecture on building nanoGPT from scratch
A generative speech model for daily dialogue.
The simplest, fastest repository for training/finetuning medium-sized GPTs.
Customizable template GPT code designed for easy experimentation with novel architectures
ReFT: Representation Finetuning for Language Models
Tools for merging pretrained large language models
H2O LLM Studio - a framework and no-code GUI for fine-tuning LLMs. Documentation: https://docs.h2o.ai/h2o-llmstudio/