Is your feature request related to a problem? Please describe.
When asking the model to predict true/false or answer an MCQ, a confidence score or probability distribution alone is not enough to produce a final decision. For example, if the model returns confidence_score / probability_distribution = [0.9, 0.06, 0, 0.04] for an MCQ with four choices, how should the final output be determined?
Describe the solution you"d like
Can we simply set the threshold to 0.5 to decide whether the model is confident enough in its top choice? Something like the sketch below is what I have in mind.
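A minimal sketch, assuming an argmax-plus-threshold rule; the function and argument names here are my own for illustration, not part of the library's API:

```python
import numpy as np

def decide(probabilities, labels, threshold=0.5):
    """Pick the argmax label, but only return it when the top
    probability clears the confidence threshold; otherwise abstain.

    `decide`, `labels`, and `threshold` are hypothetical names,
    not taken from the library.
    """
    probs = np.asarray(probabilities, dtype=float)
    best = int(np.argmax(probs))
    if probs[best] >= threshold:
        return labels[best], float(probs[best])
    return None, float(probs[best])  # None = not confident enough

# The example from above: four MCQ choices
label, conf = decide([0.9, 0.06, 0.0, 0.04], ["A", "B", "C", "D"])
print(label, conf)  # -> A 0.9
```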
Additional context
I read the blog post (https://www.refuel.ai/blog-posts/labeling-with-confidence) and am curious whether you used a threshold, and how you handled the threshold question when generating the final decisions and the AUROC plot. Thanks.