emonet.wav_splitter module#

Module for splitting existing .wav files into smaller files. Assumes m4ato_wav.py and data_prep.py have already been run, since it requires existing .wav files and training manifests.

emonet.wav_splitter.split_sample(signal: torch.Tensor, n_seconds: int, sample_rate: int = 16000, dim: int = -1) → List[torch.Tensor][source]#

Split an audio signal into multiple n-second chunks.

Discards chunks with duration < n_seconds.

Parameters
  • signal (torch.Tensor) – Audio signal.

  • n_seconds (int) – Desired duration of sample chunks.

  • sample_rate (int) – Audio signal sample rate; defaults to 16000.

  • dim (int) – Dimension to split sample on; defaults to last.

Returns

List[torch.Tensor] – List of equal-length audio samples.
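The chunking behaviour described above can be sketched as follows. This is a hypothetical re-implementation (the name `split_sample_sketch` and its body are assumptions, not the module's actual source), built on `torch.split`, which yields fixed-size chunks plus a shorter remainder that is then discarded:

```python
import torch

def split_sample_sketch(signal, n_seconds, sample_rate=16000, dim=-1):
    """Split a signal into n-second chunks, dropping any shorter remainder."""
    chunk_len = n_seconds * sample_rate
    # torch.split returns equal-size chunks plus a possibly shorter tail
    chunks = torch.split(signal, chunk_len, dim=dim)
    return [c for c in chunks if c.shape[dim] == chunk_len]

# 2.5 s of audio at 16 kHz -> two 1-second chunks; the 0.5 s tail is dropped
sig = torch.zeros(int(2.5 * 16000))
parts = split_sample_sketch(sig, n_seconds=1)
```

Because the remainder is discarded rather than padded, total audio duration is not preserved across a split.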

emonet.wav_splitter.split_files(ds: str = 'train', therapist: Optional[str] = None) → None[source]#

Split multiple samples in a large batch.

Iterates through the dataset manifest (optionally therapist-specific) to read each audio file in as a tensor, apply splits, and write the splits to .wav files. Creates new split manifests, complete with the original metadata.

Parameters
  • ds (str) – Dataset to run the batch operation on; should be one of {train, valid, test}.

  • therapist (str) – Optional therapist identifier.

Returns

None – Output written to respective .wav files; manifest written to respective .json files.
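The manifest-rewriting step can be illustrated with a small sketch. Everything here is an assumption for illustration (the function name, the `wav`/`duration`/`emotion` keys, and the `_<i>.wav` naming scheme are hypothetical, not taken from the module): each manifest entry is expanded into one entry per chunk, carrying over the original metadata:

```python
def split_manifest_entries(entries, n_seconds):
    """Expand each manifest entry into one entry per n-second chunk."""
    out = []
    for entry in entries:
        # number of full chunks; the shorter tail is discarded
        n_chunks = int(entry["duration"] // n_seconds)
        for i in range(n_chunks):
            chunk = dict(entry)  # copy so original metadata is preserved
            chunk["wav"] = f'{entry["wav"].rsplit(".wav", 1)[0]}_{i}.wav'
            chunk["duration"] = float(n_seconds)
            out.append(chunk)
    return out

entries = [{"wav": "sess1.wav", "duration": 7.2, "emotion": "neutral"}]
new_entries = split_manifest_entries(entries, n_seconds=3)
```

A 7.2-second file split into 3-second chunks yields two manifest entries and drops the 1.2-second remainder, matching the discard rule in split_sample.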

emonet.wav_splitter.split_files_therapist(ds: str = 'train') → None[source]#

Split multiple samples in a large batch.

Iterates through each therapist-specific dataset manifest to read the audio files in as tensors, apply splits, and write the splits to .wav files. Creates new split manifests, complete with the original metadata.

Parameters

ds (str) – Dataset to run the batch operation on; should be one of {train, valid, test}.

Returns

None – Output written to respective .wav files; manifest written to respective .json files.

emonet.wav_splitter.main()[source]#

Split all files for the train/valid/test datasets.
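A minimal sketch of what such an entry point likely does, assuming it simply runs the batch split once per dataset (the helper name `run_all_sketch` and the injected `split_fn` are hypothetical, used here so the loop can be shown without the real file I/O):

```python
def run_all_sketch(split_fn):
    """Invoke the batch split for every dataset manifest in turn."""
    for ds in ("train", "valid", "test"):
        split_fn(ds=ds)

# Record which datasets get processed, in order
calls = []
run_all_sketch(lambda ds: calls.append(ds))
```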