---
license: mit
language: en
pipeline_tag: audio-classification
library_name: transformers
tags:
- deepfake
- audio
- wav2vec2
- pytorch
---
# 🔊 Deepfake Audio Detection Model
## 📌 Overview
This model detects whether an audio clip is **REAL or FAKE (an AI-generated voice)**.
It is built on the **Wav2Vec2** architecture and uses transformer-based audio embeddings.
---
## 🎯 Task
Binary classification (the index-to-label mapping can also be read from the model config, as sketched below):
- 0 → REAL AUDIO
- 1 → FAKE AUDIO
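
A quick way to double-check this mapping at runtime is to read it from the checkpoint's config. This is a minimal sketch; if the config was saved without explicit label names, `id2label` falls back to generic `LABEL_0`/`LABEL_1` values.

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("Simma7/audio_model")
# Expected to mirror the 0 -> REAL, 1 -> FAKE convention above,
# provided label names were stored when the model was saved.
print(config.id2label)
```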
---
## 📥 Input
- Audio file (`.wav`)
- Sampling rate: 16 kHz (resample on load if needed; see the sketch below)
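
If your source audio is not already at 16 kHz, `librosa` can resample at load time. A minimal sketch; `input.mp3` is a placeholder path, not a file shipped with this repo.

```python
import librosa

# librosa resamples on load, so any source rate is converted to
# the 16 kHz this model expects ("input.mp3" is a placeholder).
audio, sr = librosa.load("input.mp3", sr=16000)
assert sr == 16000
```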
---
## 📤 Output
- Fake probability between 0 and 1 (see the thresholding sketch below)
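
To turn that probability into a hard REAL/FAKE decision, apply a cutoff. A minimal sketch; the 0.5 threshold is an assumption, not part of the model, and may be worth tuning on your own validation data.

```python
# 0.5 is an assumed cutoff; lower it if missing a fake is costlier
# than flagging real audio.
def to_label(fake_prob: float, threshold: float = 0.5) -> str:
    return "FAKE" if fake_prob >= threshold else "REAL"

print(to_label(0.87))  # -> "FAKE"
```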
---
## βš™οΈ Model Files
- pytorch_model.bin
- config.json
- preprocessor_config.json
- tokenizer files
---
## 🚀 Usage
```python
import librosa
import torch
from transformers import AutoProcessor, AutoModelForAudioClassification

# Load the processor and classifier; AutoModelForAudioClassification
# assumes the checkpoint ships a classification head, consistent with
# the audio-classification pipeline tag above.
processor = AutoProcessor.from_pretrained("Simma7/audio_model")
model = AutoModelForAudioClassification.from_pretrained("Simma7/audio_model")
model.eval()

# Load the clip, resampling to the 16 kHz the model expects.
audio, sr = librosa.load("test.wav", sr=16000)
inputs = processor(audio, sampling_rate=16000, return_tensors="pt")

# Run inference without tracking gradients.
with torch.no_grad():
    logits = model(**inputs).logits

# Softmax over the two classes; index 1 is FAKE per the label map above.
fake_prob = torch.softmax(logits, dim=-1)[0, 1].item()
print(f"Fake probability: {fake_prob:.3f}")
```
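
As a higher-level alternative, the same checkpoint can be driven through the `audio-classification` pipeline, which handles loading, resampling, and softmax internally. A minimal sketch, assuming the repo's classification head and preprocessor configs load cleanly through the pipeline API.

```python
from transformers import pipeline

clf = pipeline("audio-classification", model="Simma7/audio_model")
# Returns a list of {"label": ..., "score": ...} dicts, best first.
print(clf("test.wav"))
```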