Thank you for your excellent work! I have recently been using the pretrain script and checkpoint you provided to run post-pretraining on my own dataset, with vector retrieval as the end goal. I have a few questions:
I noticed that the text embedding quality is lower than that of other LLM-based baselines, although inference is faster. Do you think replacing the text model (which would also mean discarding the provided checkpoint) would be beneficial? Would the code changes required for such a replacement be significant?
In my current runs of the pretrain script, I have disabled the ITG loss. For my task, do you think the ITG loss is necessary? If I add images paired with captions generated by other large models (e.g., GPT-4o) and train them with the ITG loss, could that also benefit vector retrieval? Alternatively, perhaps I should enable the ITG loss in the early stages of training and turn it off later?
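To make the last idea concrete, here is a minimal sketch of what I have in mind: keeping the retrieval (ITC) loss on throughout and linearly decaying the ITG loss weight to zero early in training. All names here (`itc_loss`, `itg_loss`, `itg_weight`, `warm_steps`) are illustrative placeholders, not the actual identifiers in your codebase.

```python
# Hypothetical schedule: ITG loss contributes only during the first
# `warm_steps` training steps, then its weight decays to zero.

def itg_weight(step: int, warm_steps: int = 10_000) -> float:
    """Linearly decay the ITG loss weight from 1.0 to 0.0 over `warm_steps` steps."""
    return max(0.0, 1.0 - step / warm_steps)

def total_loss(itc_loss: float, itg_loss: float, step: int) -> float:
    # Retrieval (ITC) objective is always active; ITG fades out over training.
    return itc_loss + itg_weight(step) * itg_loss
```

Does a schedule along these lines seem reasonable for a retrieval-focused setup, or would you expect a hard cutoff (or no ITG at all) to work just as well?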