Hi, I tried to fine-tune I3D on UCF-101 using the pretrained models in checkpoints, but the accuracy on the validation set after 10 epochs is very low (about 30%). I replaced the logits layer of I3D and froze the other layers (a rough sketch of the replacement/freezing step is shown after the command below). Here is my script:
python -u train.py \
--dataset ucf101 \
--model i3d \
--video_path $DATASET_ROOT/jpg \
--annotation_path $DATASET_ROOT/ucfTrainTestlist/ucf101_02.json \
--batch_size 6 \
--num_classes 400 \
--finetune_num_classes 101 \
--spatial_size 224 \
--sample_duration 64 \
--learning_rate 1e-2 \
--save_dir $OUTPUT_DIR \
--dropout_keep_prob 0.5 \
--checkpoint_path checkpoints/I3D/rgb_imagenet.pth \
--finetune_prefixes logits \
--num_scales 1
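For reference, this is roughly what I do in code. It is only a minimal PyTorch sketch: `build_i3d` is a placeholder for however the repo actually constructs the model, and I'm assuming the classification head is a 1x1x1 Conv3d exposed as `model.logits`; the real attribute names may differ.

```python
import torch
import torch.nn as nn

# Build the 400-class Kinetics model and load the provided checkpoint.
# build_i3d is a placeholder for the repo's actual model constructor.
model = build_i3d(num_classes=400)
state = torch.load('checkpoints/I3D/rgb_imagenet.pth', map_location='cpu')
model.load_state_dict(state)

# Swap the 400-way logits layer for a fresh 101-way one
# (assumes the head is a 1x1x1 Conv3d stored at model.logits).
model.logits = nn.Conv3d(model.logits.in_channels, 101, kernel_size=1)

# Freeze everything except the new head.
for name, param in model.named_parameters():
    param.requires_grad = name.startswith('logits')

# Only the unfrozen (head) parameters go to the optimizer.
optimizer = torch.optim.SGD(
    [p for p in model.parameters() if p.requires_grad],
    lr=1e-2, momentum=0.9)
```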
Can you please give me some advice on fine-tuning on UCF-101?
Thanks!
What other layers are you freezing? I replaced the logits layer but did not freeze any layers, and fine-tuning for 10 epochs gives about 90% accuracy on the training and test sets of UCF101_01. I used a similar batch size and learning rate, and rgb_imagenet.pth as my checkpoint.
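Conceptually, my setup differs only in the freezing step. A rough sketch (again with placeholder names: `build_i3d` and `model.logits` stand in for whatever the repo actually exposes):

```python
import torch
import torch.nn as nn

# Same head replacement as in your setup...
model = build_i3d(num_classes=400)  # placeholder constructor
model.load_state_dict(
    torch.load('checkpoints/I3D/rgb_imagenet.pth', map_location='cpu'))
model.logits = nn.Conv3d(model.logits.in_channels, 101, kernel_size=1)

# ...but no freezing: every parameter keeps requires_grad=True, so the whole
# backbone is fine-tuned along with the new 101-way head.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2, momentum=0.9)
```

A common variant is to give the pretrained layers a smaller learning rate via per-parameter-group LRs, but plain full fine-tuning was enough here.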