---
license: mit
tags:
- audio
- pytorch
- torchscript
- guitar-amp-simulation
- real-time
inference:
  framework: pytorch
  task: audio-to-audio
  inputs:
  - name: input
    type: float[]
    description: "Input waveform or features (e.g. [batch, channels, samples])"
  outputs:
  - name: output
    type: float[]
    description: "Output waveform or processed features"
---
## Usage

This is a model I trained to mimic a JCM 800 amp. It doesn't sound very good, but as a first pass, I'm glad I have it.

Download [GuneAmp.exe](GuneAmp.exe) and try running your own conversion.

Read my notes: [GuneAmpNotes](GuneAmpNotes.pdf)

## Using the TorchScript Model from Hugging Face

If you wish to use the TorchScript version of the model directly, you can download it from Hugging Face and load it using the following Python code.

First, ensure you have the necessary libraries installed:

```bash
pip install torch huggingface_hub
```

Then, use the following Python code to load and use the model:

```python
import torch
from huggingface_hub import hf_hub_download

model_id = 'sgune/gune-amp'
model_filename = 'metal_amp_v2_ts.pt'

# Download the TorchScript model from the Hugging Face Hub
model_path = hf_hub_download(repo_id=model_id, filename=model_filename)

# Load the model on GPU if available, otherwise CPU
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print(f"Loading model on device: {device}")

model = torch.jit.load(model_path, map_location=device)
model.eval()

print("Model loaded successfully!")

# Run inference on a dummy input of shape [batch, samples]
input_size = 1024
dummy_input = torch.randn(1, input_size, dtype=torch.float32).to(device)

print(f"Running inference with dummy input of shape: {dummy_input.shape}")

with torch.no_grad():  # Disable gradient calculations for inference
    output = model(dummy_input)

print("Inference complete!")
print("Example output shape:", output.shape)
print("Example output values:", output)
```
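
The dummy input above stands in for real audio. For real-time use, you would typically split a longer waveform into fixed-size frames matching the model's input length. Here is a minimal sketch of that framing step; the non-overlapping frames and zero-padding strategy are illustrative assumptions, not part of the released model:

```python
import torch

def chunk_waveform(waveform: torch.Tensor, frame_size: int) -> torch.Tensor:
    """Split a mono waveform [samples] into frames [n_frames, frame_size],
    zero-padding the tail so the last frame is full length."""
    n_samples = waveform.shape[-1]
    n_frames = (n_samples + frame_size - 1) // frame_size  # ceiling division
    pad = n_frames * frame_size - n_samples
    padded = torch.nn.functional.pad(waveform, (0, pad))
    return padded.view(n_frames, frame_size)

# 2560 samples at frame_size=1024 -> 3 frames, the last one zero-padded
frames = chunk_waveform(torch.randn(2560), 1024)
print(frames.shape)  # torch.Size([3, 1024])
```

Each row of `frames` can then be passed through the model, adding a batch dimension with `unsqueeze(0)` if needed.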
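
Once you have an output tensor, you will probably want to listen to it. Here is a minimal sketch that writes a mono float tensor to disk with Python's standard-library `wave` module; the file name, sample rate, and the assumption that output values lie in [-1, 1] are illustrative, not dictated by the model:

```python
import wave

import torch

def save_wav(path: str, samples: torch.Tensor, sample_rate: int = 44100) -> None:
    """Write a mono float tensor in [-1, 1] to a 16-bit PCM WAV file."""
    pcm = (samples.clamp(-1.0, 1.0) * 32767).to(torch.int16)
    with wave.open(path, 'wb') as f:
        f.setnchannels(1)     # mono
        f.setsampwidth(2)     # 16-bit samples
        f.setframerate(sample_rate)
        f.writeframes(pcm.numpy().tobytes())

# Write one second of a test tone (stand-in for the model's output tensor)
save_wav('processed.wav', torch.sin(torch.linspace(0, 100, 44100)))
```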
## COMING SOON

Deep dives into `infer.py`, `model.py`, `train.py`, and `config.py`.