Crosslingual Generalization through Multitask Finetuning
The ParroT framework enhances and regulates the translation abilities of chat models, built on open-source LLMs (e.g., LLaMA-7B, BLOOMZ-7B1-mt) and human-written translation and evaluation data.
Finetuning a small BLOOMZ model (bloomz-560m) on a small dataset with limited resources.
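The entry above describes finetuning bloomz-560m with limited resources. A minimal sketch of one supervised fine-tuning step with Hugging Face `transformers` is below; the two-example dataset, learning rate, and prompt format are illustrative assumptions, not taken from that repository.

```python
# Hedged sketch: one fine-tuning step for bigscience/bloomz-560m.
# Dataset, hyperparameters, and prompt format are illustrative assumptions.

def format_example(prompt: str, response: str, eos_token: str = "</s>") -> str:
    """Concatenate a prompt and its target response into one training string."""
    return f"{prompt}\n{response}{eos_token}"

if __name__ == "__main__":
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("bigscience/bloomz-560m")
    model = AutoModelForCausalLM.from_pretrained("bigscience/bloomz-560m")

    # Tiny illustrative dataset; real data would be loaded from files.
    pairs = [("Translate to French: cat", "chat"),
             ("Translate to French: dog", "chien")]
    texts = [format_example(p, r, tokenizer.eos_token) for p, r in pairs]
    batch = tokenizer(texts, return_tensors="pt", padding=True)
    batch["labels"] = batch["input_ids"].clone()  # causal-LM labels

    model.train()
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
    loss = model(**batch).loss  # cross-entropy over prompt + response
    loss.backward()
    optimizer.step()
```

On constrained hardware, the same loop is typically combined with gradient accumulation, mixed precision, or parameter-efficient methods such as LoRA.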
ChatSakura: an open-source multilingual conversational large language model.
LLM application: a fine-tuned model that generates social media posts from technical blog posts. I used the documentation at https://numpy.org/numpy-tutorials/index.html to build a synthetic dataset and fine-tuned an open-source model on it.
Research proof of concept on mitigating bias in large language models (FLAN-T5 and BLOOMZ) through fine-tuning.