niobures committed on
Commit 7df0952 · verified · 1 parent: c943119
.gitattributes CHANGED
@@ -33,3 +33,7 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ releases/v0.0.1/torchfcpe-0.0.1-py3-none-any.whl filter=lfs diff=lfs merge=lfs -text
+ releases/v0.0.2/torchfcpe-0.0.2-py3-none-any.whl filter=lfs diff=lfs merge=lfs -text
+ releases/v0.0.3/torchfcpe-0.0.3-py3-none-any.whl filter=lfs diff=lfs merge=lfs -text
+ releases/v0.0.4/torchfcpe-0.0.4-py3-none-any.whl filter=lfs diff=lfs merge=lfs -text
FCPE.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:15f28c2ccc865f87299d323df2fa390ec2112d91ba72c439de8b3be5235016f2
+ size 82134543
README.md ADDED
@@ -0,0 +1,133 @@
+ <h1 align="center">TorchFCPE</h1>
+
+ ## Overview
+
+ TorchFCPE (Fast Context-based Pitch Estimation) is a PyTorch-based library for audio pitch extraction and MIDI conversion. This README provides a quick guide to using the library for audio pitch inference and MIDI extraction.
+
+ Note: the MIDI extractor in FCPE quantizes MIDI from f0 using non-neural-network methods.
+
+ Note: I won't be updating FCPE (or the benchmark) soon, but I will release a version with cleaned-up code no later than next year.
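As background for the first note above, the standard non-neural mapping from an f0 value in Hz to a MIDI note number is `note = 69 + 12 * log2(f0 / 440)`. A minimal sketch of that quantization step (illustrative only; FCPE's actual extractor also handles note segmentation and timing):

```python
import math

def f0_to_midi(f0_hz):
    """Map an f0 value in Hz to the nearest MIDI note number.

    Unvoiced frames (f0 <= 0) return None. Uses the equal-temperament
    formula: note = 69 + 12 * log2(f0 / 440), where A4 = 440 Hz = note 69.
    """
    if f0_hz <= 0:
        return None
    return round(69 + 12 * math.log2(f0_hz / 440.0))

# A4 = 440 Hz -> 69; C4 ~ 261.63 Hz -> 60; 0 Hz (unvoiced) -> None
notes = [f0_to_midi(f) for f in [440.0, 261.63, 0.0]]
```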
+
+ ## Installation
+
+ Before using the library, make sure you have the necessary dependencies installed:
+
+ ```bash
+ pip install torchfcpe
+ ```
+
+ ## Usage
+
+ ### 1. Audio Pitch Inference
+
+ ```python
+ from torchfcpe import spawn_bundled_infer_model
+ import torch
+ import librosa
+
+ # Configure device and target hop size
+ device = 'cpu'  # or 'cuda' if using a GPU
+ sr = 16000  # Sample rate
+ hop_size = 160  # Hop size for processing
+
+ # Load and preprocess audio
+ audio, sr = librosa.load('test.wav', sr=sr)
+ audio = librosa.to_mono(audio)
+ audio_length = len(audio)
+ f0_target_length = (audio_length // hop_size) + 1
+ audio = torch.from_numpy(audio).float().unsqueeze(0).unsqueeze(-1).to(device)
+
+ # Load the model
+ model = spawn_bundled_infer_model(device=device)
+
+ # Perform pitch inference
+ f0 = model.infer(
+     audio,
+     sr=sr,
+     decoder_mode='local_argmax',  # Recommended mode
+     threshold=0.006,  # Threshold for V/UV decision
+     f0_min=80,  # Minimum pitch
+     f0_max=880,  # Maximum pitch
+     interp_uv=False,  # Whether to interpolate unvoiced frames
+     output_interp_target_length=f0_target_length,  # Interpolate to target length
+ )
+
+ print(f0)
+ ```
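For intuition about the `f0_target_length` computation above: with `hop_size=160` at `sr=16000`, the model produces one f0 frame per 10 ms (100 frames per second), plus one frame for the final boundary:

```python
sr = 16000         # sample rate in Hz
hop_size = 160     # samples per f0 frame -> 100 frames per second
audio_length = sr * 2                              # e.g. 2 seconds of audio
f0_target_length = (audio_length // hop_size) + 1  # 200 hops + 1 boundary frame
print(f0_target_length)  # 201
```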
+
+ ### 2. MIDI Extraction
+
+ ```python
+ # Extract MIDI from audio (the method name is spelled `extact_midi` in the library)
+ midi = model.extact_midi(
+     audio,
+     sr=sr,
+     decoder_mode='local_argmax',  # Recommended mode
+     threshold=0.006,  # Threshold for V/UV decision
+     f0_min=80,  # Minimum pitch
+     f0_max=880,  # Maximum pitch
+     output_path="test.mid",  # Save MIDI to file
+ )
+
+ print(midi)
+ ```
+
+ ### Notes
+
+ - **Inference Parameters:**
+
+   - `audio`: Input audio as a `torch.Tensor`.
+   - `sr`: Sample rate of the audio.
+   - `decoder_mode` (Optional): Mode for decoding; 'local_argmax' is recommended.
+   - `threshold` (Optional): Threshold for the voiced/unvoiced decision; default is 0.006.
+   - `f0_min` (Optional): Minimum pitch value; default is 80 Hz.
+   - `f0_max` (Optional): Maximum pitch value; default is 880 Hz.
+   - `interp_uv` (Optional): Whether to interpolate unvoiced frames; default is False.
+   - `output_interp_target_length` (Optional): Length to which the output pitch is interpolated.
+
+ - **MIDI Extraction Parameters:**
+   - `audio`: Input audio as a `torch.Tensor`.
+   - `sr`: Sample rate of the audio.
+   - `decoder_mode` (Optional): Mode for decoding; 'local_argmax' is recommended.
+   - `threshold` (Optional): Threshold for the voiced/unvoiced decision; default is 0.006.
+   - `f0_min` (Optional): Minimum pitch value; default is 80 Hz.
+   - `f0_max` (Optional): Maximum pitch value; default is 880 Hz.
+   - `output_path` (Optional): File path for saving the MIDI file. If not provided, only the MIDI structure is returned.
+   - `tempo` (Optional): BPM for the MIDI file. If None, BPM is predicted automatically.
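To illustrate what interpolating unvoiced frames means here, a sketch of the general idea (not the library's internal code): with interpolation enabled, frames where f0 is zero are filled by linear interpolation between the surrounding voiced frames.

```python
import numpy as np

def interp_unvoiced(f0: np.ndarray) -> np.ndarray:
    """Linearly interpolate over unvoiced (f0 == 0) frames."""
    f0 = f0.astype(float).copy()
    voiced = f0 > 0
    if voiced.any():
        idx = np.arange(len(f0))
        # np.interp fills gaps between voiced frames and extends the edge
        # values over any leading or trailing unvoiced frames.
        f0 = np.interp(idx, idx[voiced], f0[voiced])
    return f0

# [220, 0, 0, 440] -> the two gaps become evenly spaced between 220 and 440
smoothed = interp_unvoiced(np.array([220.0, 0.0, 0.0, 440.0]))
```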
+
+ ## Additional Features
+
+ - **Model as a PyTorch Module:**
+   You can use the model as a standard PyTorch module. For example:
+
+   ```python
+   # Change device
+   model = model.to(device)
+
+   # Compile model
+   model = torch.compile(model)
+   ```
+
+ ## Paper
+
+ If you find our work useful, please consider citing the paper:
+
+ ```
+ @misc{luo2025fcpefastcontextbasedpitch,
+       title={FCPE: A Fast Context-based Pitch Estimation Model},
+       author={Yuxin Luo and Ruoyi Zhang and Lu-Chuan Liu and Tianyu Li and Hangyu Liu},
+       year={2025},
+       eprint={2509.15140},
+       archivePrefix={arXiv},
+       primaryClass={cs.SD},
+       url={https://arxiv.org/abs/2509.15140},
+ }
+ ```
+
+ ### Important details
+
+ The model we use in our paper is DDSP-200K; you can get it here: [DDSP-200K Model](https://huggingface.co/ChiTu/FCPE/tree/main).
+
+ There is also another model, released earlier, available here: [FCPE-Previous](/torchfcpe/assets/fcpe_c_v001.pt).
+
+ More information about the experiments will be released after the paper is accepted or rejected.
model/.gitattributes ADDED
@@ -0,0 +1,35 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tar filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
model/DDSP_200k.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8544427eebbf2baef6213cc9a05057e46961617a8e5bd96975a0d42da6a09059
+ size 43362881
model/README.md ADDED
@@ -0,0 +1,3 @@
+ ---
+ license: cc-by-nc-sa-4.0
+ ---
releases/v0.0.1/FCPE-0.0.1.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:924520da854e1c4c2ca44309c81b48e5ed219cb1616a3821b3ad1f452957d332
+ size 40236269
releases/v0.0.1/torchfcpe-0.0.1-py3-none-any.whl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9908abf3929e049e460f9d53f9108cb433249126de92039e3d74ade64473cba7
+ size 40219623
releases/v0.0.2/FCPE-0.0.2.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:44bb35cacafe4c66ff174cd150f5f9202f9262e22d79faae5afc82fbe4d644b4
+ size 40236537
releases/v0.0.2/torchfcpe-0.0.2-py3-none-any.whl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d188dec17423cf483c4eb435ff721dc7573b82c959d6af6b552cd36332d5a875
+ size 40219715
releases/v0.0.3/FCPE-0.0.3.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ef5bb0fb924c81efcb4722852b3e5d45df034ae15ebda8a7c5f934d2abbc8a93
+ size 40238649
releases/v0.0.3/torchfcpe-0.0.3-py3-none-any.whl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c92c2a3316b7dd71c24f8edfccc063bc0f1413b8ba3f9e556be9613db4d24fc1
+ size 40220171
releases/v0.0.4/FCPE-0.0.4.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9df5af09d467bf3eacc0ddb19dea055fd7695fa3c2c093b4d0cafd7a4562f896
+ size 40240639
releases/v0.0.4/torchfcpe-0.0.4-py3-none-any.whl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f042c463d850d76c6f4899a0b84f0b694bb560adf05f4de951097a756d17472d
+ size 40222012