---

license: mit
tags:
- audio
- pytorch
- torchscript
- guitar-amp-simulation
- real-time
inference:
  framework: pytorch
  task: audio-to-audio
  inputs:
    - name: input
      type: float[]
      description: "Input waveform or features (e.g. [batch, channels, samples])"
  outputs:
    - name: output
      type: float[]
      description: "Output waveform or processed features"
---


## Usage
This is a model I trained to mimic a JCM 800 amp. It doesn't sound very good yet, but as a first pass, I'm glad to have it.

![Example inference output](infer.PNG)

Download [GuneAmp.exe](GuneAmp.exe) and try running your own conversion.

Read my notes: [GuneAmpNotes](GuneAmpNotes.pdf).

## Using the TorchScript Model from Hugging Face

If you want to use the TorchScript version of the model directly, you can download it from Hugging Face and load it with the Python code below.

First, make sure you have the necessary libraries installed:
```bash
pip install torch huggingface_hub
```

Then, use the following Python code to load and use the model:

```python
import torch
from huggingface_hub import hf_hub_download

model_id = 'sgune/gune-amp'
model_filename = 'metal_amp_v2_ts.pt'

# Download the TorchScript checkpoint from the Hugging Face Hub
model_path = hf_hub_download(repo_id=model_id, filename=model_filename)

# Load the model on GPU if available, otherwise CPU
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print(f"Loading model on device: {device}")

model = torch.jit.load(model_path, map_location=device)
model.eval()
print("Model loaded successfully!")

input_size = 1024
dummy_input = torch.randn(1, input_size, dtype=torch.float32).to(device)
print(f"Running inference with dummy input of shape: {dummy_input.shape}")

with torch.no_grad():  # Disable gradient calculations for inference
    output = model(dummy_input)

print("Inference complete!")
print("Example output shape:", output.shape)
print("Example output values:", output)
```
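The dummy-input example above processes a single 1024-sample frame. To run the model over a full recording, one approach is to slice the waveform into frames of that size and concatenate the outputs. The sketch below assumes the model maps a `(1, 1024)` float tensor to a same-shaped tensor, as the dummy input suggests; `fake_model` is a hypothetical stand-in (a soft-clipping waveshaper) so the snippet runs without downloading the checkpoint. Swap in the loaded TorchScript model to process real audio.

```python
import torch

FRAME_SIZE = 1024  # matches the dummy input above; the real frame size is an assumption


def process_waveform(model, waveform: torch.Tensor, frame_size: int = FRAME_SIZE) -> torch.Tensor:
    """Run the model over a long 1-D waveform in fixed-size frames.

    The last frame is zero-padded to fill a full frame, and the padding is
    trimmed from the result so the output length matches the input length.
    """
    n = waveform.numel()
    pad = (-n) % frame_size  # samples needed to fill the final frame
    padded = torch.nn.functional.pad(waveform, (0, pad))
    frames = padded.reshape(-1, frame_size)  # (num_frames, frame_size)
    with torch.no_grad():
        out_frames = [model(f.unsqueeze(0)).squeeze(0) for f in frames]
    return torch.cat(out_frames)[:n]


# Hypothetical stand-in for the TorchScript amp model: a soft-clipping waveshaper
fake_model = lambda x: torch.tanh(3.0 * x)

wave = torch.sin(torch.linspace(0, 200, 44100))  # one second of a test tone at 44.1 kHz
processed = process_waveform(fake_model, wave)
print(processed.shape)  # torch.Size([44100])
```

Frame-by-frame processing like this ignores state across frame boundaries, which can cause audible seams with some models; overlap-add windowing is a common refinement if that happens.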


## Coming Soon
Deep dives into `infer.py`, `model.py`, `train.py`, and `config.py`.