dahara1 committed on
Commit b642558 · verified · 1 Parent(s): 1986e03

Update README.md

Files changed (1)
  1. README.md +17 -13
README.md CHANGED
@@ -10,13 +10,24 @@ tags:
 
 # VoiceCore_smoothquant
 
-This is a model quantized with SmoothQuant (W8A8) so that [webbigdata/VoiceCore](https://huggingface.co/webbigdata/VoiceCore) runs at high speed on vLLM and similar engines.
-See the original model for details.
+This is a model quantized with SmoothQuant (W8A8) to run [webbigdata/VoiceCore](https://huggingface.co/webbigdata/VoiceCore) at high speed with vLLM.
+See the [webbigdata/VoiceCore](https://huggingface.co/webbigdata/VoiceCore) model card for details.
 
 ## Install/Setup
 
+vLLM [reportedly also works on AMD GPUs](https://docs.vllm.ai/en/v0.6.5/getting_started/amd-installation.html), but this has not been verified here.
+It also seems to run on Mac (CPU), but the [gguf version](https://huggingface.co/webbigdata/VoiceCore_gguf) may be faster there.
+
+The steps below are the setup instructions for Linux with an Nvidia GPU.
+
 ```
 python3 -m venv VL
 source VL/bin/activate
@@ -33,7 +44,6 @@ from transformers import AutoTokenizer
 from snac import SNAC
 from vllm import LLM, SamplingParams
 
-# --- 1. Settings ---
 QUANTIZED_MODEL_PATH = "webbigdata/VoiceCore_smoothquant"
 prompts = [
     "テストです",
@@ -41,7 +51,6 @@ prompts = [
 ]
 chosen_voice = "matsukaze_male[neutral]"
 
-# --- 2. Prepare the tokenizer and inputs ---
 print("Loading tokenizer and preparing inputs...")
 tokenizer = AutoTokenizer.from_pretrained(QUANTIZED_MODEL_PATH)
 prompts_ = [(f"{chosen_voice}: " + p) if chosen_voice else p for p in prompts]
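The hunks above elide how `final_token_ids` is actually built inside the loop over `prompts_`. As a rough sketch only: Orpheus-style voice models, which VoiceCore is based on, are usually prompted by wrapping the text tokens in special start/end tokens along these lines; the specific token ids here are assumptions, not taken from this commit, and should be checked against the VoiceCore model card.

```
# Hypothetical sketch of the elided prompt construction (Orpheus-style).
# The special token ids are assumptions, not taken from this commit.
import torch

all_prompt_token_ids = []
for prompt in prompts_:
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids
    start = torch.tensor([[128259]], dtype=torch.int64)        # assumed start-of-human token
    end = torch.tensor([[128009, 128260]], dtype=torch.int64)  # assumed end-of-text / end-of-human tokens
    final_token_ids = torch.cat([start, input_ids, end], dim=1)[0].tolist()
    all_prompt_token_ids.append(final_token_ids)
```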
@@ -53,13 +62,12 @@ for prompt in prompts_:
     all_prompt_token_ids.append(final_token_ids)
 print("Inputs prepared successfully.")
 
-# --- 3. Load the vLLM model (runs on GPU) ---
 print(f"Loading SmoothQuant model with vLLM from: {QUANTIZED_MODEL_PATH}")
 llm = LLM(
     model=QUANTIZED_MODEL_PATH,
     trust_remote_code=True,
-    max_model_len=10000, # reduce this if you run out of memory
-    #gpu_memory_utilization=0.9 # what fraction of the maximum GPU memory to use; adjust as needed
+    max_model_len=10000, # if you run out of memory, reduce this
+    #gpu_memory_utilization=0.9 # fraction of the maximum GPU memory to use; adjust as needed
 )
 sampling_params = SamplingParams(
     temperature=0.6,
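The diff also cuts off the remaining `SamplingParams` arguments after `temperature=0.6`. For orientation, a plausible completion in an Orpheus-style pipeline looks like the following; every value besides the temperature is an assumption, not from this commit.

```
# Hypothetical completion of the truncated SamplingParams (values are assumptions):
sampling_params = SamplingParams(
    temperature=0.6,
    top_p=0.95,               # assumed nucleus-sampling cutoff
    repetition_penalty=1.1,   # Orpheus-style models tend to loop without a penalty above 1.0
    max_tokens=4096,          # room for several seconds' worth of audio tokens
    stop_token_ids=[128258],  # assumed end-of-speech token
)
```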
@@ -70,23 +78,20 @@ sampling_params = SamplingParams(
 )
 print("vLLM model loaded.")
 
-# --- 4. Inference with vLLM ---
 print("Generating audio tokens with vLLM...")
 outputs = llm.generate(prompt_token_ids=all_prompt_token_ids, sampling_params=sampling_params)
 print("Generation complete.")
 
-# --- 5. Prepare the SNAC decoder (runs on CPU) --- the GPU is faster, but this fails when vLLM has reserved most of the memory
+# The GPU is faster, but this fails if vLLM has already reserved most of the GPU memory
 print("Loading SNAC decoder to CPU...")
 snac_model = SNAC.from_pretrained("hubertsiuzdak/snac_24khz")
-snac_model.to("cpu") # place it on the CPU explicitly
+snac_model.to("cpu")
 print("SNAC model loaded.")
 
-# --- 6. Post-processing and audio decoding ---
 print("Decoding tokens to audio...")
 audio_start_token = 128257
 
 def redistribute_codes(code_list):
-    """Reshape the codes into the format the SNAC decoder expects."""
     layer_1, layer_2, layer_3 = [], [], []
     for i in range(len(code_list) // 7):
         layer_1.append(code_list[7*i])
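The hunk truncates `redistribute_codes` after the first layer. As a sketch of how this function is usually completed in SNAC pipelines, assuming the common 7-codes-per-frame interleaving with offsets of 4096 per position (not taken from this commit):

```
import torch

def redistribute_codes(code_list):
    # Split each 7-token frame across SNAC's three codebook layers.
    # The interleaving pattern and offsets below are assumptions.
    layer_1, layer_2, layer_3 = [], [], []
    for i in range(len(code_list) // 7):
        layer_1.append(code_list[7*i])
        layer_2.append(code_list[7*i+1] - 4096)
        layer_3.append(code_list[7*i+2] - 2*4096)
        layer_3.append(code_list[7*i+3] - 3*4096)
        layer_2.append(code_list[7*i+4] - 4*4096)
        layer_3.append(code_list[7*i+5] - 5*4096)
        layer_3.append(code_list[7*i+6] - 6*4096)
    codes = [
        torch.tensor(layer_1).unsqueeze(0),
        torch.tensor(layer_2).unsqueeze(0),
        torch.tensor(layer_3).unsqueeze(0),
    ]
    with torch.inference_mode():
        return snac_model.decode(codes)  # waveform tensor of shape (1, 1, samples)
```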
@@ -120,7 +125,6 @@ for output in outputs:
     code_list = [t.item() - 128266 for t in trimmed_row]
     code_lists.append(code_list)
 
-# --- 7. Save the audio files ---
 for i, code_list in enumerate(code_lists):
     if i >= len(prompts): break
 
 