Preferred System :: NVIDIA RTX A4000
Avoid servers with:
- RTX 5060 Ti
- Tesla P100
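A quick way to check the host before setting anything up is to query the GPU name with `nvidia-smi` and match it against the lists above. This is a sketch: the exact product strings reported by `nvidia-smi` (e.g. "NVIDIA GeForce RTX 5060 Ti", "Tesla P100-PCIE-16GB") vary by card and driver, so the name sets below are illustrative assumptions.

```python
# Sketch: classify the detected GPU against the preferred/avoid lists above.
# The exact name strings reported by nvidia-smi are assumptions.
import subprocess

PREFERRED = {"NVIDIA RTX A4000"}
AVOID = {"NVIDIA GeForce RTX 5060 Ti", "Tesla P100-PCIE-16GB"}

def classify_gpu(name: str) -> str:
    """Return 'preferred', 'avoid', or 'untested' for a GPU name string."""
    if any(p in name for p in PREFERRED):
        return "preferred"
    if any(a in name for a in AVOID):
        return "avoid"
    return "untested"

def detect_gpu() -> str:
    # `nvidia-smi --query-gpu=name --format=csv,noheader` prints one name per GPU.
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=name", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip().splitlines()[0]

if __name__ == "__main__":
    try:
        print(classify_gpu(detect_gpu()))
    except (FileNotFoundError, subprocess.CalledProcessError):
        print("no NVIDIA GPU detected")
```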
⚡ Quick Start + 🐍 Miniconda & Conda Environment + 🌐 Setup NGROK
sudo apt update && sudo apt upgrade -y
sudo apt install -y iproute2 libgl1 nano wget unzip nvtop git git-lfs build-essential cmake \
libopenblas-dev liblapack-dev libx11-dev libgtk-3-dev libglib2.0-0
git config --global credential.helper store
git clone -b main https://huggingface.co/HawkEyesAI/v2_HE_Universe
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
chmod +x Miniconda3-latest-Linux-x86_64.sh
./Miniconda3-latest-Linux-x86_64.sh -b -p $HOME/miniconda3
export PATH="$HOME/miniconda3/bin:$PATH"
conda init
source ~/.bashrc
conda create --name HE python=3.11 -y
conda activate HE
curl -s https://ngrok-agent.s3.amazonaws.com/ngrok.asc | sudo tee /etc/apt/trusted.gpg.d/ngrok.asc >/dev/null
echo "deb https://ngrok-agent.s3.amazonaws.com buster main" | sudo tee /etc/apt/sources.list.d/ngrok.list
sudo apt update
sudo apt install -y ngrok
ngrok config add-authtoken 2lPN9d5cdnGlSrWb4JGEGVI1Mah_4bvvrGdKKU2ME7nkck8L7
Optional ngrok exposure:
ngrok http --domain=batnlp.ngrok.app 5656
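Once the tunnel is up, the ngrok agent exposes a local inspection API at `http://127.0.0.1:4040/api/tunnels` that lists active tunnels as JSON. The sketch below parses that payload; the `sample` dict is an illustrative payload (field names follow the agent API, but the values here are made up), so the snippet also runs when no agent is live.

```python
# Sketch: list active ngrok tunnel URLs from the agent's local API payload.
# The sample payload is illustrative; use live_urls() against a running agent.
import json
from urllib.request import urlopen

def public_urls(payload: dict) -> list[str]:
    return [t["public_url"] for t in payload.get("tunnels", [])]

def live_urls() -> list[str]:
    # Requires a running ngrok agent on this machine.
    with urlopen("http://127.0.0.1:4040/api/tunnels") as resp:
        return public_urls(json.load(resp))

sample = {"tunnels": [{"name": "command_line",
                       "public_url": "https://batnlp.ngrok.app",
                       "proto": "https",
                       "config": {"addr": "http://localhost:5656"}}]}
print(public_urls(sample))  # ['https://batnlp.ngrok.app']
```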
📦 Python Packages
pip install --upgrade pip
pip install jupyter pandas openpyxl
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu126
cd v2_HE_Universe
Install all dependencies:
pip install -r requirements.txt
pip install faster-whisper soundfile pydub speechbrain \
huggingface_hub==0.16.4 einops lightning lightning_utilities torchmetrics primePy
pip install mtcnn opentelemetry-exporter-otlp
pip install --no-deps facenet-pytorch scikit-learn rich pandas matplotlib tensorboard pyannote.audio threadpoolctl torchcodec safetensors pyannote.metrics colorlog pyannote.pipeline optuna pyannote.core pyannote.database \
sortedcontainers torch-audiomentations julius torch-pitch-shift \
opentelemetry-instrumentation opentelemetry-api opentelemetry-distro
pip install asteroid-filterbanks python-dotenv
pip install --no-deps noisereduce lazy_loader audioread soxr numba llvmlite
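Because several of the installs above use `--no-deps`, pip never checks their dependency trees, so a quick post-install audit is useful. The helper below uses the stdlib `importlib.metadata` to report which distributions are actually present; the `expected` list is just an illustrative subset of the packages above.

```python
# Sketch: verify that --no-deps installs actually landed, since pip skipped
# dependency resolution for them. The expected list is illustrative.
from importlib.metadata import version, PackageNotFoundError

def installed_version(dist: str):
    """Return the installed version string, or None if the distribution is absent."""
    try:
        return version(dist)
    except PackageNotFoundError:
        return None

def missing(dists):
    return [d for d in dists if installed_version(d) is None]

if __name__ == "__main__":
    expected = ["scikit-learn", "pyannote.audio", "noisereduce", "numba"]
    gaps = missing(expected)
    print("missing:", gaps if gaps else "none")
```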
opentelemetry-bootstrap -a install
# pip install -U pyannote.audio
python get_nltk_data.py
🔊 Audio Backend
sed -i 's/available_backends = .*/available_backends = ["sox_io", "soundfile"]/' \
$CONDA_PREFIX/lib/python3.11/site-packages/speechbrain/utils/torch_audio_backend.py
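The same pin can be applied from Python, which makes the substitution the sed one-liner performs explicit: any `available_backends = ...` assignment is replaced with a fixed two-backend list. The regex below mirrors the sed pattern; it is shown on a sample line rather than the real speechbrain file.

```python
# Sketch: the same backend pin as the sed one-liner, as an explicit regex
# substitution. Demonstrated on a sample line, not the real file.
import re

def pin_backends(source: str) -> str:
    """Replace any `available_backends = ...` assignment with a fixed list."""
    return re.sub(
        r"available_backends = .*",
        'available_backends = ["sox_io", "soundfile"]',
        source,
    )

sample = "available_backends = get_available_backends()"
print(pin_backends(sample))  # available_backends = ["sox_io", "soundfile"]
```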
🔧 Patch face_recognition_models
python - <<'EOF'
import sys, subprocess, warnings, pathlib, importlib.util, time

def install_pkg(pkg):
    subprocess.run([sys.executable, "-m", "pip", "install", pkg], check=True)

try:
    if importlib.util.find_spec("face_recognition_models") is None:
        print("📦 Installing missing package: face_recognition_models ...")
        install_pkg("face_recognition_models")
        time.sleep(2)
    import face_recognition_models
    file_path = pathlib.Path(face_recognition_models.__file__)
    content = file_path.read_text()
    if "pkg_resources" in content:
        print("🩹 Applying safe patch for face_recognition_models...")
        new_import = (
            "import importlib.resources as resources\n"
            "def resource_filename(package_or_requirement, resource_name):\n"
            "    return str(resources.files(package_or_requirement).joinpath(resource_name))\n"
        )
        patched = []
        for line in content.splitlines():
            if "pkg_resources" in line and "import" in line:
                patched.append(new_import)
            else:
                patched.append(line)
        file_path.write_text("\n".join(patched))
        print("✅ Safe patch applied successfully.")
    else:
        print("✅ face_recognition_models already safe or patched.")
except Exception as e:
    warnings.warn(f"⚠️ face_recognition_models patch failed: {e}")
EOF
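The reason this patch is safe: `pkg_resources.resource_filename` (deprecated along with setuptools' `pkg_resources`) can be reimplemented on top of the stdlib `importlib.resources` API available since Python 3.9. The shim below is exactly the replacement the script injects, demonstrated on the stdlib `json` package so it runs anywhere.

```python
# The drop-in replacement the patch injects: resource_filename built on the
# stdlib importlib.resources API instead of the deprecated pkg_resources.
import importlib.resources as resources

def resource_filename(package_or_requirement: str, resource_name: str) -> str:
    # Same signature and return type as pkg_resources.resource_filename.
    return str(resources.files(package_or_requirement).joinpath(resource_name))

path = resource_filename("json", "__init__.py")
print(path.endswith("__init__.py"))  # True
```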
🔑 HuggingFace Hub Login & Token
pip install huggingface_hub==0.16.4
huggingface-cli login
🔥 Fix cuDNN Path for GPU
export LD_LIBRARY_PATH="$CONDA_PREFIX/lib/python3.11/site-packages/nvidia/cudnn/lib:$LD_LIBRARY_PATH"
pip uninstall polars -y
pip install "polars[rtcompat]"
python Universe_API.py
Or permanently add to ~/.bashrc:
export LD_LIBRARY_PATH="$HOME/miniconda3/envs/HE/lib/python3.11/site-packages/nvidia/cudnn/lib:$LD_LIBRARY_PATH"
source ~/.bashrc
pip uninstall polars -y
pip install "polars[rtcompat]"
python Universe_API.py
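Rather than hard-coding the env path, the cuDNN lib directory can be derived from the interpreter's own `site-packages`, so the same export works under conda or a plain venv. This assumes the `nvidia/cudnn/lib` layout that the pip-installed NVIDIA cuDNN wheels use (as pulled in by the cu126 PyTorch install above).

```python
# Sketch: derive the cuDNN lib dir for the active env instead of hard-coding
# it. Assumes the nvidia/cudnn layout used by pip-installed cuDNN wheels.
import sysconfig
from pathlib import Path

def cudnn_lib_dir() -> Path:
    # purelib is this env's site-packages directory.
    site = Path(sysconfig.get_paths()["purelib"])
    return site / "nvidia" / "cudnn" / "lib"

if __name__ == "__main__":
    d = cudnn_lib_dir()
    # Emit a shell line suitable for ~/.bashrc.
    print(f'export LD_LIBRARY_PATH="{d}:$LD_LIBRARY_PATH"')
```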