This approach combines audio STFT, MFCC, and chroma features with a Transformer model for timbre modeling and high-level abstraction. Compared with relying on a single feature, the combination reduces the risk of both overfitting and underfitting and generalizes better, achieving good results with a small amount of data and minimal training.
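As a minimal sketch of this three-feature front end, the representations can be computed with librosa; the parameter values (n_fft, hop_length, n_mfcc) below are illustrative assumptions, not values taken from this work.

```python
import numpy as np
import librosa

def extract_features(path, sr=22050, n_fft=1024, hop_length=256, n_mfcc=20):
    y, sr = librosa.load(path, sr=sr)
    # Magnitude STFT: frame-level frequency content
    stft = np.abs(librosa.stft(y, n_fft=n_fft, hop_length=hop_length))
    # MFCCs: compact spectral-envelope (timbre) descriptors
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc, hop_length=hop_length)
    # Chroma: pitch-class energy, capturing harmonic and tonal content
    chroma = librosa.feature.chroma_stft(y=y, sr=sr, n_fft=n_fft, hop_length=hop_length)
    return stft, mfcc, chroma
```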
The model first processes the audio signal with a U-Net, which isolates the vocal track. The isolated vocals are then fed in parallel into PitchNet and HuBERT (Wav2Vec2): PitchNet extracts pitch features, while HuBERT captures fine-grained content features of the vocals.
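A hedged sketch of the HuBERT branch is shown below, using a pretrained model from Hugging Face transformers; the checkpoint name and the 16 kHz resampling are assumptions made for illustration, since the text does not specify them.

```python
import torch
import torchaudio
from transformers import HubertModel

# Assumed example checkpoint; any pretrained HuBERT model could be substituted.
hubert = HubertModel.from_pretrained("facebook/hubert-base-ls960").eval()

def vocal_content_features(vocal_wav: torch.Tensor, sr: int) -> torch.Tensor:
    """vocal_wav: mono waveform of shape (num_samples,), e.g. the U-Net output."""
    if sr != 16000:  # HuBERT checkpoints expect 16 kHz audio
        vocal_wav = torchaudio.functional.resample(vocal_wav, sr, 16000)
    with torch.no_grad():
        out = hubert(vocal_wav.unsqueeze(0))  # (1, num_samples) -> hidden states
    return out.last_hidden_state              # (1, frames, 768) frame-level features
```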
The core of the model is CombineNet, which receives its input from the Features module. This module computes three representations: STFT, MFCC, and chroma, each capturing a different aspect of the audio. The features are enhanced by the TimbreBlock and then passed to the Encoder; at this stage, noise is transformed via an STFT and combined with the features before they enter the Encoder. The encoded features are passed to the Decoder, where they are combined with the earlier features to generate the final audio output.
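The following schematic PyTorch sketch mirrors this data flow. The module names follow the text, but the layer choices, dimensions, and the exact way noise is mixed in are assumptions made only to illustrate the sequence of operations.

```python
import torch
import torch.nn as nn

class CombineNetSketch(nn.Module):
    def __init__(self, feat_dim=512, hidden=256):
        super().__init__()
        # Enhances the stacked STFT/MFCC/chroma features (hypothetical TimbreBlock stand-in)
        self.timbre_block = nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.ReLU())
        self.encoder = nn.GRU(feat_dim, hidden, batch_first=True)
        self.decoder = nn.GRU(hidden, feat_dim, batch_first=True)

    def forward(self, features, noise_stft):
        # features:   (batch, frames, feat_dim) concatenated STFT/MFCC/chroma features
        # noise_stft: (batch, frames, feat_dim) STFT of the injected noise
        enhanced = self.timbre_block(features)
        mixed = enhanced + noise_stft          # combine noise with features before encoding
        latent, _ = self.encoder(mixed)
        decoded, _ = self.decoder(latent)
        return decoded + features              # recombine with the earlier features
```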
CombineNet is based on an encoder-decoder architecture and is trained to generate a mask that is used to extract and replace the timbre, ultimately producing the final output audio.
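As a minimal sketch of the mask-based replacement step, the generated mask can be applied multiplicatively to blend source and target magnitude spectrograms; this is a common convention and an assumption here, since the exact operation is not spelled out above.

```python
import torch

def apply_timbre_mask(source_mag: torch.Tensor,
                      target_mag: torch.Tensor,
                      mask: torch.Tensor) -> torch.Tensor:
    """All tensors: (batch, freq_bins, frames); mask values are expected in [0, 1]."""
    mask = mask.clamp(0.0, 1.0)
    # Regions selected by the mask take on the target timbre; the rest keeps the source
    return (1.0 - mask) * source_mag + mask * target_mag
```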