---
license: apache-2.0
---
# Qhash-v0.1
<div align="center">
<img src="https://huggingface.co/datasets/Quantamhash/Assets/resolve/main/images/dark_logo.png"
alt="Title card"
style="width: 500px;
height: auto;
object-position: center top;">
</div>
<div align="center">
<img src="https://huggingface.co/datasets/Quantamhash/Assets/resolve/main/images/Qhash.png"
alt="Title card"
style="width: 500px;
height: auto;
object-position: center top;">
</div>
---
This repository contains the speaker embedding models for our Qhash-v0.1 [transformer](Quantamhash/Qhash-v0.1-transformer) and [hybrid](Quantamhash/Qhash-v0.1-hybrid) models.
The speaker embedding models are based on the [ResNet293-SimAM-ASP](https://github.com/VoxBlink2/ScriptsForVoxBlink2/tree/main/asv) models from VoxBlink2. We use the pretrained models, as we found the fine-tuned variants performed worse.
The output of the speaker embedding model is then passed through an LDA layer that compresses it from 256 to 128 dimensions, removing further spurious information from the clip's embedding before it is fed into the Qhash models.
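The projection step above can be sketched as a simple matrix multiplication. This is a minimal illustration, not the repository's actual code: the variable names are hypothetical, and the LDA matrix here is filled with placeholder random values rather than a learned projection.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 256-d output of the ResNet293-SimAM-ASP speaker embedding model.
embedding = rng.standard_normal(256)

# Placeholder stand-in for the learned 256 -> 128 LDA projection matrix.
lda_matrix = rng.standard_normal((256, 128))

# Compress the embedding to 128 dimensions before passing it to the Qhash models.
compressed = embedding @ lda_matrix
assert compressed.shape == (128,)
```

In practice the LDA matrix would be fit on labeled speaker data so that the retained 128 dimensions emphasize speaker-discriminative directions.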