
ENH: Add back clipping of splits for learning curves to target durations #773

Open
NickleDave opened this issue Aug 27, 2024 · 1 comment

Comments

@NickleDave
Collaborator

When we make splits for learning curves now, we do not clip the vectors we use for bookkeeping so that they have a consistent length; i.e., we do not train on fixed durations.

This logic lived on the WindowDataset class in version 0.x, in the crop_spect_vectors_keep_classes method: https://github.com/vocalpy/vak/blob/0.8/src/vak/datasets/window_dataset.py#L246

I just rewrote some of this logic for the BioSoundSegBench dataset, here:
vocalpy/CMACBench@f8a6b28

In doing so I realized that the duration measured in seconds of audio can differ from the duration measured in number of spectrogram time bins, and that this difference varies depending on the method used to compute the spectrogram.
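To make this concrete, here is a rough sketch (not vak code; the helper and parameter names are hypothetical) of how the same clip duration in seconds can map to different numbers of time bins depending on the framing convention the spectrogram code uses:

```python
def n_time_bins(duration_s, samplerate, nperseg, hop_length, center=False):
    """Rough count of spectrogram time bins for a clip of a given duration.

    Illustrative only: the exact count depends on how the spectrogram code
    frames the signal, e.g. scipy.signal.spectrogram only keeps full frames,
    while librosa.stft with center=True pads the edges of the signal.
    """
    n_samples = int(duration_s * samplerate)
    if center:
        # librosa-style centered framing: 1 + n_samples // hop_length frames
        return 1 + n_samples // hop_length
    # scipy-style framing: only frames that fit entirely inside the signal
    return 1 + (n_samples - nperseg) // hop_length

# Two clips with the same duration in seconds give different bin counts
# under the two conventions:
print(n_time_bins(2.0, 32000, nperseg=512, hop_length=64, center=False))  # 993
print(n_time_bins(2.0, 32000, nperseg=512, hop_length=64, center=True))   # 1001
```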

I ended up using some hacks so that we get indexing vectors of (mostly) consistent lengths.
But it's annoyingly fragile.

Probably the better way to do this from first principles is to clip the audio so that we get the target duration in seconds, while keeping all classes present in the dataset, and then let the spectrogram code do whatever it wants. A minimal sketch of that idea is below.
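Roughly something like this, for the single-file case (all names here are hypothetical, and the real logic would need to work across all the files in a split, not just one):

```python
import numpy as np

def clip_audio_to_duration(audio, samplerate, annot_labels, annot_offsets_s,
                           target_dur_s, labelset):
    """Clip an audio array to a target duration in seconds,
    keeping at least one occurrence of every class in ``labelset``.

    Assumes ``annot_labels`` and ``annot_offsets_s`` (segment offsets in
    seconds) are sorted by time.
    """
    # find the earliest time by which every class has appeared at least once
    seen = set()
    min_dur_s = None
    for label, offset_s in zip(annot_labels, annot_offsets_s):
        seen.add(label)
        if set(labelset).issubset(seen):
            min_dur_s = offset_s
            break
    if min_dur_s is None:
        raise ValueError("not all classes in labelset occur in this audio")

    # clip at the target duration, but never before all classes have appeared
    clip_dur_s = max(target_dur_s, min_dur_s)
    n_samples = int(np.round(clip_dur_s * samplerate))
    return audio[:n_samples]
```

Then the spectrogram is computed from the clipped audio, and whatever number of time bins that produces is what we get; we never have to crop the indexing vectors after the fact.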

@NickleDave
Collaborator Author

Probably the better way to do this from first principles is to clip (see vocalpy/vocalpy#149) the audio so that we get the target duration in seconds, while keeping all classes present in the dataset, and then let the spectrogram code do whatever it wants.

A question here is whether we want to copy the audio to the prepared dataset.
Especially if we clip it, I would want to save the clipped audio along with metadata about the source audio that produced the clip.
But the trade-off is that this increases the size of the dataset, so we would probably make it an option specific to learning curves and not turn it on by default.
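For the metadata part, something as simple as a JSON sidecar file next to each clipped audio file would probably do. A minimal sketch (function and field names are hypothetical; assumes soundfile is available for writing audio):

```python
import json
from pathlib import Path

import soundfile as sf

def save_clip_with_metadata(clip, samplerate, source_path, clip_dur_s, dst_dir):
    """Save a clipped audio array plus a JSON sidecar recording
    which source audio produced it."""
    dst_dir = Path(dst_dir)
    dst_dir.mkdir(parents=True, exist_ok=True)
    clip_path = dst_dir / (Path(source_path).stem + ".clip.wav")
    sf.write(clip_path, clip, samplerate)
    meta = {
        "source_audio": str(source_path),
        "clip_dur_s": clip_dur_s,
        "n_samples": int(len(clip)),
        "samplerate": samplerate,
    }
    clip_path.with_suffix(".json").write_text(json.dumps(meta, indent=2))
    return clip_path
```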
