eustlb HF Staff committed on
Commit 965b429 · verified · 1 Parent(s): f44f85f

Update README.md

Files changed (1): README.md (+15 −0)
README.md CHANGED
@@ -34,6 +34,10 @@ A hosted [HuggingFace space](https://huggingface.co/spaces/sesame/csm-1b) is als
 
 Generate a sentence:
 
+<details>
+
+<summary> code snippet </summary>
+
 ```python
 import torch
 from transformers import CsmForConditionalGeneration, AutoProcessor
@@ -64,8 +68,14 @@ audio = model.generate(**inputs, output_audio=True)
 processor.save_audio(audio, "example_without_context.wav")
 ```
 
+</details>
+
 CSM sounds best when provided with context:
 
+<details>
+
+<summary> code snippet </summary>
+
 ```python
 import torch
 from transformers import CsmForConditionalGeneration, AutoProcessor
@@ -107,6 +117,9 @@ audio = model.generate(**inputs, output_audio=True)
 processor.save_audio(audio, "example_with_context.wav")
 ```
 
+</details>
+
+
 ### Batched Inference 📦
 
 CSM supports batched inference:
@@ -379,6 +392,8 @@ trainer.train()
 
 </details>
 
+---
+
 ## FAQ
 
 **Does this model come with any voices?**