Upload folder using huggingface_hub

README.md CHANGED
@@ -5,7 +5,8 @@ This is a model I trained to mimic a JCM 800 AMP. It doesn't sound very good, bu

Download `GuneAmp.exe` and try running your own conversion.

-Read my notes
+Read my notes `GuneAmp.pdf`
+

```python
import torch

@@ -17,6 +18,48 @@ model.load_state_dict(ckpt["model"])
model.eval()
```

+## Using the TorchScript Model from Hugging Face
+
+If you wish to use the TorchScript version of the model directly, you can download it from Hugging Face and load it using the following Python code.
+
+First, ensure you have the necessary libraries installed:
+```bash
+pip install torch huggingface_hub
+```
+
+Then, load and run the model:
+
+```python
+import torch
+from huggingface_hub import hf_hub_download
+
+model_id = 'sgune/gune-amp'
+model_filename = 'metal_amp_v2_ts.pt'
+
+model_path = hf_hub_download(repo_id=model_id, filename=model_filename)
+
+# Load the model on GPU if available, otherwise on CPU
+device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
+print(f"Loading model on device: {device}")
+
+model = torch.jit.load(model_path, map_location=device)
+model.eval()
+
+print("Model loaded successfully!")
+
+input_size = 1024
+dummy_input = torch.randn(1, input_size, dtype=torch.float32).to(device)
+
+print(f"Running inference with dummy input of shape: {dummy_input.shape}")
+
+with torch.no_grad():  # Disable gradient calculations for inference
+    output = model(dummy_input)
+
+print("Inference complete!")
+print("Example output shape:", output.shape)
+print("Example output values:", output)
+```
+

## COMING SOON
-
+`infer.py`, `model.py`, `train.py` and `config.py` deepdives.
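The dummy input above only verifies that the TorchScript model loads and runs. As a hedged follow-up, here is one way a real clip might be pushed through it, reusing `model` and `device` from the block above. The `guitar_di.wav` filename, the mono input, and the non-overlapping `(1, 1024)` window interface are illustrative assumptions, not something this repo confirms, and `torchaudio` is an extra dependency.

```python
# Sketch only: run a real recording through the TorchScript model.
# Assumes `model` and `device` from the README block above, a mono WAV,
# and that the model maps a (1, 1024) window of raw samples to output samples.
import torch
import torchaudio  # extra dependency: pip install torchaudio

wav, sr = torchaudio.load("guitar_di.wav")  # hypothetical input file, shape (channels, samples)
wav = wav.mean(dim=0)                       # mix down to mono

window = 1024  # matches input_size in the example above (assumed)
chunks = []
with torch.no_grad():
    for start in range(0, wav.numel() - window + 1, window):
        frame = wav[start:start + window].unsqueeze(0).to(device)  # (1, window)
        chunks.append(model(frame).cpu().flatten())

out = torch.cat(chunks)
torchaudio.save("guitar_amp.wav", out.unsqueeze(0), sr)  # hypothetical output path
```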

config.py ADDED
@@ -0,0 +1,14 @@
+import torch
+
+# Path to your trained model checkpoint
+CHECKPOINT_PATH = "metalampnet_ckpt.pth"
+
+# Device to run inference on
+DEVICE = "cuda" if torch.cuda.is_available() else "cpu"
+
+# Sliding window size and prediction length
+WINDOW = 2048
+PRED_SAMPLES = 512
+
+# Output sample rate
+SR = 44100
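The constants above suggest how inference is framed: slide a `WINDOW`-sample context across the input and predict `PRED_SAMPLES` output samples per step at sample rate `SR`. Until the promised `infer.py` deepdive lands, here is a minimal sketch of such a loop under those assumptions; the hop size equal to `PRED_SAMPLES` and the output slicing are guesses, not the repo's confirmed logic.

```python
# Sketch only: sliding-window inference as suggested by the config.py constants.
# The hop of PRED_SAMPLES and the output slicing are assumptions, not repo logic.
import torch
from config import DEVICE, WINDOW, PRED_SAMPLES

def run_sliding_window(model: torch.nn.Module, audio: torch.Tensor) -> torch.Tensor:
    """Process a 1-D tensor of raw samples and return the predicted signal."""
    outputs = []
    with torch.no_grad():
        for start in range(0, audio.numel() - WINDOW + 1, PRED_SAMPLES):
            frame = audio[start:start + WINDOW].unsqueeze(0).to(DEVICE)  # (1, WINDOW)
            pred = model(frame).cpu().flatten()
            outputs.append(pred[:PRED_SAMPLES])  # keep PRED_SAMPLES samples per step (assumed)
    return torch.cat(outputs)
```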