
Preferred System :: NVIDIA RTX A4000

Avoid these GPUs

  • RTX 5060 Ti
  • Tesla P100
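
Before installing anything, it can help to confirm which GPU the machine actually reports. The helper below is an illustrative sketch (the `gpu_ok` function is not part of this repo): it classifies the name printed by `nvidia-smi` against the lists above.

```shell
# Hypothetical helper: classify the detected GPU against the lists above.
gpu_ok() {
  case "$1" in
    *"RTX 5060 Ti"*|*"Tesla P100"*) echo "avoid" ;;
    *"RTX A4000"*)                  echo "preferred" ;;
    *)                              echo "untested" ;;
  esac
}

# On a machine with NVIDIA drivers installed:
# gpu_ok "$(nvidia-smi --query-gpu=name --format=csv,noheader | head -n1)"
```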

⚡ Quick Start + 🐍 Miniconda & Conda Environment + 🛠 Setup NGROK


sudo apt update && sudo apt upgrade -y
sudo apt install -y iproute2 libgl1 nano wget unzip nvtop git git-lfs build-essential cmake \
libopenblas-dev liblapack-dev libx11-dev libgtk-3-dev libglib2.0-0

git config --global credential.helper store
git clone -b main https://huggingface.co/HawkEyesAI/v2_HE_Universe

wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
chmod +x Miniconda3-latest-Linux-x86_64.sh
./Miniconda3-latest-Linux-x86_64.sh -b -p $HOME/miniconda3
export PATH="$HOME/miniconda3/bin:$PATH"
conda init
source ~/.bashrc
conda create --name HE python=3.11 -y
conda activate HE
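
After `conda activate HE`, it's worth confirming that the new env's Python 3.11 is actually first on `PATH` before installing packages. A minimal sketch (the `py_minor_ok` helper is illustrative, not part of the repo):

```shell
# Illustrative check: does `python --version` report the 3.11 we just created?
py_minor_ok() {
  # $1: output of `python --version`, e.g. "Python 3.11.9"
  case "$1" in
    "Python 3.11"*) echo "ok" ;;
    *)              echo "wrong-python" ;;
  esac
}

# py_minor_ok "$(python --version 2>&1)"
```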

curl -s https://ngrok-agent.s3.amazonaws.com/ngrok.asc | sudo tee /etc/apt/trusted.gpg.d/ngrok.asc >/dev/null
echo "deb https://ngrok-agent.s3.amazonaws.com buster main" | sudo tee /etc/apt/sources.list.d/ngrok.list
sudo apt update
sudo apt install -y ngrok
ngrok config add-authtoken <YOUR_NGROK_AUTHTOKEN>   # use your own token from the ngrok dashboard; never commit it

Optional ngrok exposure:

ngrok http --domain=batnlp.ngrok.app 5656
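
Once the tunnel is up, the ngrok agent exposes a local status API at `http://127.0.0.1:4040/api/tunnels` (the default agent port). The sketch below is a hypothetical convenience that pulls the public URL out of that JSON with plain `grep`/`cut`:

```shell
# Extract the first public_url from the ngrok agent's JSON status output.
tunnel_url() {
  # $1: JSON as returned by `curl -s http://127.0.0.1:4040/api/tunnels`
  printf '%s' "$1" | grep -o '"public_url":"[^"]*"' | head -n1 | cut -d'"' -f4
}

# tunnel_url "$(curl -s http://127.0.0.1:4040/api/tunnels)"
```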

📦 Python Packages

pip install --upgrade pip
pip install jupyter pandas openpyxl

pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu126
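
The wheel index above pins CUDA 12.6 builds, so the CUDA version PyTorch reports after installation should start with `12.6`. A hedged sketch (`cuda_matches` is illustrative, not a repo helper):

```shell
# Compare the CUDA version torch was built against with the expected 12.6.
cuda_matches() {
  # $1: e.g. output of `python -c "import torch; print(torch.version.cuda)"`
  case "$1" in
    12.6*) echo "ok" ;;
    *)     echo "mismatch" ;;
  esac
}

# cuda_matches "$(python -c 'import torch; print(torch.version.cuda)')"
```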

cd v2_HE_Universe

Install all dependencies

pip install -r requirements.txt

pip install faster-whisper soundfile pydub speechbrain \
huggingface_hub==0.16.4 einops lightning lightning_utilities torchmetrics primePy

pip install mtcnn opentelemetry-exporter-otlp

pip install --no-deps facenet-pytorch scikit-learn rich pandas matplotlib tensorboard pyannote.audio threadpoolctl torchcodec safetensors pyannote.metrics colorlog pyannote.pipeline optuna pyannote.core pyannote.database \
sortedcontainers torch-audiomentations julius torch-pitch-shift \
opentelemetry-instrumentation opentelemetry-api opentelemetry-distro

pip install asteroid-filterbanks python-dotenv


pip install --no-deps noisereduce lazy_loader audioread soxr numba llvmlite
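
Because several of the commands above use `--no-deps`, pip will not pull in transitive requirements, and a missing dependency only surfaces at import time. A quick import sweep (illustrative helper, run inside the active env) catches that early:

```shell
# Try importing each module; report any that fail to import.
check_imports() {
  failed=0
  for mod in "$@"; do
    python3 -c "import $mod" 2>/dev/null || { echo "missing: $mod"; failed=1; }
  done
  return $failed
}

# check_imports noisereduce numba audioread soxr
```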


opentelemetry-bootstrap -a install


# pip install -U pyannote.audio


python get_nltk_data.py

🛠 Audio Backend

sed -i 's/available_backends = .*/available_backends = ["sox_io", "soundfile"]/' \
$CONDA_PREFIX/lib/python3.11/site-packages/speechbrain/utils/torch_audio_backend.py
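
After the in-place `sed`, a quick `grep` confirms the edit actually landed in the SpeechBrain source. The `backend_patched` helper below is a sketch, using the same file path as above:

```shell
# Return "patched" if the hard-coded backend list is present in the file.
backend_patched() {
  # $1: path to speechbrain/utils/torch_audio_backend.py
  if grep -q 'available_backends = \["sox_io", "soundfile"\]' "$1"; then
    echo "patched"
  else
    echo "unpatched"
  fi
}

# backend_patched "$CONDA_PREFIX/lib/python3.11/site-packages/speechbrain/utils/torch_audio_backend.py"
```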

🔧 Patch face_recognition_models

python - <<'EOF'
import sys, subprocess, warnings, pathlib, importlib.util, time

def install_pkg(pkg):
    subprocess.run([sys.executable, "-m", "pip", "install", pkg], check=True)

try:
    if importlib.util.find_spec("face_recognition_models") is None:
        print("📦 Installing missing package: face_recognition_models ...")
        install_pkg("face_recognition_models")
        time.sleep(2)

    import face_recognition_models
    file_path = pathlib.Path(face_recognition_models.__file__)
    content = file_path.read_text()

    if "pkg_resources" in content:
        print("🩹 Applying safe patch for face_recognition_models...")
        new_import = (
            "import importlib.resources as resources\n"
            "def resource_filename(package_or_requirement, resource_name):\n"
            "    return str(resources.files(package_or_requirement).joinpath(resource_name))\n"
        )
        patched = []
        for line in content.splitlines():
            if "pkg_resources" in line and "import" in line:
                patched.append(new_import)
            else:
                patched.append(line)
        file_path.write_text("\n".join(patched))
        print("✅ Safe patch applied successfully.")
    else:
        print("✅ face_recognition_models already safe or patched.")

except Exception as e:
    warnings.warn(f"⚠️ face_recognition_models patch failed: {e}")
EOF

🔑 HuggingFace Hub Login & Token

pip install huggingface_hub==0.16.4
huggingface-cli login
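
To verify the token was stored, `huggingface-cli whoami` prints your username when authenticated and (in this huggingface_hub version) `Not logged in` otherwise. A small classifier over that output, for scripting (illustrative only):

```shell
# Classify `huggingface-cli whoami` output as logged in or not.
hf_logged_in() {
  # $1: output of `huggingface-cli whoami`
  case "$1" in
    ""|*"Not logged in"*) echo "no" ;;
    *)                    echo "yes" ;;
  esac
}

# hf_logged_in "$(huggingface-cli whoami 2>&1)"
```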

🖥 Fix cuDNN Path for GPU

export LD_LIBRARY_PATH="$CONDA_PREFIX/lib/python3.11/site-packages/nvidia/cudnn/lib:$LD_LIBRARY_PATH"
pip uninstall polars -y
pip install "polars[rtcompat]"
python Universe_API.py

Or permanently add to ~/.bashrc (the shell used by `conda init` above):

export LD_LIBRARY_PATH="$HOME/miniconda3/envs/HE/lib/python3.11/site-packages/nvidia/cudnn/lib:$LD_LIBRARY_PATH"
source ~/.bashrc
pip uninstall polars -y
pip install "polars[rtcompat]"
python Universe_API.py
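
Instead of hard-coding the env path, the cuDNN lib directory can also be derived from the active interpreter's site-packages, which keeps the export valid if the env is renamed or moved. A sketch (the helper name is hypothetical):

```shell
# Build the cuDNN lib path from a site-packages directory.
cudnn_lib_dir() {
  # $1: site-packages path, e.g. from `python -c "import site; print(site.getsitepackages()[0])"`
  printf '%s/nvidia/cudnn/lib\n' "$1"
}

# export LD_LIBRARY_PATH="$(cudnn_lib_dir "$(python -c 'import site; print(site.getsitepackages()[0])')"):$LD_LIBRARY_PATH"
```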