ENH: Add back clipping of splits for learning curves to target durations #773
A question here is whether we want to copy the audio to the prepared dataset.
When we make splits now for learning curves, we do not clip the vectors we use for bookkeeping so that they are a consistent length; i.e., we do not train on fixed durations.
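To make the idea of clipping bookkeeping vectors concrete, here is a minimal sketch. The names (`frame_labels`, `timebin_dur`) are illustrative, not vak's actual API; the point is just truncating a per-time-bin vector so a split has a fixed duration.

```python
import numpy as np

def clip_to_target_duration(frame_labels, timebin_dur, target_dur):
    """Truncate a per-time-bin bookkeeping vector to a fixed duration.

    Hypothetical helper: `frame_labels` is one label per spectrogram time bin,
    `timebin_dur` is the duration of one bin in seconds, `target_dur` is the
    desired duration of the split in seconds.
    """
    n_bins = int(target_dur // timebin_dur)
    if n_bins > frame_labels.shape[0]:
        raise ValueError("split is shorter than the target duration")
    return frame_labels[:n_bins]

# 100 bins of 0.25 s each (25 s total), clipped to a 10 s target -> 40 bins
clipped = clip_to_target_duration(np.arange(100), timebin_dur=0.25, target_dur=10.0)
```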
This logic lived on the WindowDataset class in version 0.x, in the `crop_spect_vectors_keep_classes` method: https://github.com/vocalpy/vak/blob/0.8/src/vak/datasets/window_dataset.py#L246

I just rewrote some of this logic for the BioSoundSegBench dataset, here:
vocalpy/CMACBench@f8a6b28
In doing so I realized that the duration measured in seconds of audio can differ from the duration measured in number of spectrogram time bins, and that this difference varies depending on the method used to compute the spectrogram.
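The mismatch is easy to see from generic STFT framing math (this is not vak's actual spectrogram code, just an illustration): the same audio duration yields different numbers of time bins depending on the window and hop parameters.

```python
def n_time_bins(n_samples, n_fft, hop_length):
    """Number of frames for a non-centered STFT over `n_samples` samples."""
    return 1 + (n_samples - n_fft) // hop_length

sr = 32000
n_samples = int(sr * 2.0)  # exactly 2.0 s of audio

# Same 2 s of audio, two different framings:
fine = n_time_bins(n_samples, n_fft=512, hop_length=64)    # -> 993 bins
coarse = n_time_bins(n_samples, n_fft=512, hop_length=256)  # -> 249 bins

# Duration recovered from bin counts is not exactly 2.0 s either:
# 993 * 64 / 32000 = 1.986 s, 249 * 256 / 32000 = 1.992 s
```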
I ended up using some hacks so that we get indexing vectors of (mostly) consistent lengths.
But it's annoyingly fragile.
Probably the better way to do this from first principles is to clip the audio in such a way that we get the target duration in seconds -- while keeping all classes present in the dataset -- and then let the spectrogram code do whatever it wants.
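A hedged sketch of that first-principles approach, under the assumption that we have per-segment onsets, offsets, and labels in seconds (all names here are illustrative, not vak's implementation): pick a clip point at the target duration, then push it later only as far as needed so every class still has at least one complete segment in the clip.

```python
import numpy as np

def clip_audio_keep_classes(audio, sr, onsets_s, offsets_s, labels, target_dur):
    """Clip `audio` to ~`target_dur` seconds, extending the clip point
    just enough that every class keeps at least one complete segment.

    Hypothetical helper; `onsets_s`/`offsets_s`/`labels` describe annotated
    segments in seconds, `sr` is the sampling rate in Hz.
    """
    end_s = target_dur
    for cls in set(labels):
        # earliest time at which a full segment of this class has ended
        first_off = min(off for off, lbl in zip(offsets_s, labels) if lbl == cls)
        if first_off > end_s:
            end_s = first_off
    return audio[: int(end_s * sr)], end_s

# 10 s of audio at sr=10; class "b" only ends at 6.0 s,
# so a 2.0 s target gets extended to 6.0 s to keep "b" present
audio = np.zeros(100)
clipped, end_s = clip_audio_keep_classes(
    audio, sr=10,
    onsets_s=[0.0, 5.0], offsets_s=[1.0, 6.0], labels=["a", "b"],
    target_dur=2.0,
)
```

The clipped audio can then be handed to whatever spectrogram code the user configures, and the bookkeeping vectors derived from the resulting time bins will be consistent by construction.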