RAFT, or Retrieval-Augmented Fine-Tuning, is a method that combines a fine-tuning phase with a RAG-based retrieval phase. It is particularly suited to building agents that realistically emulate a specific human target.
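The retrieval phase of such a pipeline can be illustrated with a minimal sketch: rank stored documents by cosine similarity to a query embedding and keep the top matches for the prompt. Everything here is illustrative (the toy 3-dimensional "embeddings" stand in for a real encoder's output); it is not code from the project.

```python
import numpy as np

def retrieve(query_vec, doc_vecs, k=2):
    """Return indices of the k documents most similar to the query (cosine similarity)."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    scores = d @ q                      # cosine similarity of each doc to the query
    return np.argsort(scores)[::-1][:k] # highest-scoring documents first

# Toy embeddings standing in for a real sentence encoder's output.
docs = np.array([
    [0.9, 0.1, 0.0],   # doc 0
    [0.0, 1.0, 0.0],   # doc 1
    [0.8, 0.2, 0.1],   # doc 2
])
query = np.array([1.0, 0.0, 0.0])
top = retrieve(query, docs, k=2)
print(top)  # indices of the retrieved documents, best match first
```

In a full RAFT setup the retrieved passages would be prepended to the prompt of the fine-tuned model rather than printed.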
Developed a machine learning model that generates diverse interview questions aligned with specific topics, ensuring depth of conversation. Integrated Natural Language Processing (NLP) algorithms to analyse spoken responses, identify grammatical errors, and offer accurate corrections after the interview.
Fine-tuning a Large Language Model (LLM) for question answering (QA): building a QA chatbot with LoRA and Flan-T5 Large.
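LoRA freezes the pretrained weights and trains only a low-rank update. A minimal numpy sketch of the idea, with toy dimensions rather than the actual Flan-T5 Large configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, alpha = 512, 8, 16                 # hidden size, LoRA rank, scaling (toy values)

W = rng.standard_normal((d, d))          # frozen pretrained weight, never updated
A = rng.standard_normal((r, d)) * 0.01   # trainable down-projection
B = np.zeros((d, r))                     # trainable up-projection, zero-initialized

# Effective weight during fine-tuning: W stays fixed; only A and B receive gradients.
W_eff = W + (alpha / r) * (B @ A)

full_params = W.size
lora_params = A.size + B.size
print(lora_params / full_params)  # fraction of parameters actually trained
```

Because B starts at zero, the model's behaviour is unchanged at initialization, and the trainable parameter count is a small fraction of the full matrix, which is what makes LoRA fine-tuning cheap.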
For conceptual understanding, you can refer to my Medium blog, which provides in-depth coverage of SVC along with various other topics relevant to data science.
A fine-tuned version of Phi-3-mini-4k-instruct for generating SQL queries from natural language prompts, utilizing synthetic datasets and QLoRA for efficient adaptation and deployment.
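QLoRA keeps the base model's weights in 4-bit precision while training LoRA adapters on top. A rough numpy illustration of the quantize/dequantize step, using simple per-tensor absmax int4 rounding rather than the NF4 scheme QLoRA actually uses:

```python
import numpy as np

def quantize_absmax_int4(w):
    """Quantize weights to signed 4-bit integers in [-7, 7] with a per-tensor scale."""
    scale = np.abs(w).max() / 7.0
    q = np.clip(np.round(w / scale), -7, 7).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the integer codes."""
    return q.astype(np.float32) * scale

w = np.array([0.7, -0.34, 0.04, -0.7], dtype=np.float32)
q, scale = quantize_absmax_int4(w)
w_hat = dequantize(q, scale)
print(q)      # integer codes
print(w_hat)  # approximate reconstruction of the original weights
```

The reconstruction error is bounded by half a quantization step, which is why 4-bit storage plus full-precision LoRA adapters can match much larger memory footprints during adaptation.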
This project is designed to classify YouTube comments as toxic or non-toxic using BERT (Bidirectional Encoder Representations from Transformers). By fine-tuning a pre-trained BERT model, we leverage state-of-the-art NLP capabilities to identify harmful content in online conversations.
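The fine-tuned model's classification head emits two logits per comment; a small numpy sketch of how such logits become a toxic/non-toxic decision (the logit values below are fabricated for illustration, not real model output):

```python
import numpy as np

LABELS = ["non-toxic", "toxic"]

def softmax(logits):
    z = logits - logits.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def classify(logits, threshold=0.5):
    """Map per-comment [non-toxic, toxic] logits to labels via softmax probability."""
    probs = softmax(logits)
    return [LABELS[int(p[1] >= threshold)] for p in probs]

# Fabricated logits standing in for the output of the fine-tuned BERT head.
logits = np.array([[2.1, -1.3],    # confidently non-toxic
                   [-0.4, 3.0]])   # confidently toxic
print(classify(logits))
```

Lowering the threshold trades precision for recall, which matters when the cost of missing a toxic comment outweighs the cost of a false positive.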