Fix: Define missing audio input and required imports in example code

#3
Files changed (1): README.md (+4 −0)
````diff
@@ -9,6 +9,8 @@ This repo brings fairseq ContentVec model to HuggingFace Transformers.
 ## How to use
 To use this model, you need to define
 ```python
+from transformers import HubertModel
+import torch.nn as nn
 class HubertModelWithFinalProj(HubertModel):
     def __init__(self, config):
         super().__init__(config)
@@ -21,6 +23,8 @@ class HubertModelWithFinalProj(HubertModel):
 
 and then load the model with
 ```python
+audio = torch.randn(1, 16000)
+
 model = HubertModelWithFinalProj.from_pretrained("lengyue233/content-vec-best")
 
 x = model(audio)["last_hidden_state"]
````
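For reference, a minimal self-contained sketch of the snippet as it reads after this patch. Two things here are assumptions, since the diff elides the class body with `...`: the `final_proj` layer (shown as a plain `nn.Linear` sized by `classifier_proj_size`), and the tiny randomly initialized `HubertConfig` that stands in for the `from_pretrained("lengyue233/content-vec-best")` checkpoint so the sketch runs without downloading weights. Note also that `import torch.nn as nn` binds only the name `nn`, so the sketch adds an explicit `import torch` for `torch.randn`.

```python
import torch
import torch.nn as nn
from transformers import HubertConfig, HubertModel


class HubertModelWithFinalProj(HubertModel):
    def __init__(self, config):
        super().__init__(config)
        # Assumed projection head (the diff elides the class body); defining it
        # lets a checkpoint's final_proj weights load onto the model.
        self.final_proj = nn.Linear(config.hidden_size, config.classifier_proj_size)


# Small random config as a stand-in for the real checkpoint, so the
# sketch runs offline; swap in from_pretrained(...) for actual use.
config = HubertConfig(
    hidden_size=32,
    num_hidden_layers=2,
    num_attention_heads=2,
    intermediate_size=64,
)
model = HubertModelWithFinalProj(config).eval()

audio = torch.randn(1, 16000)  # 1 second of 16 kHz audio
with torch.no_grad():
    x = model(audio)["last_hidden_state"]

# The convolutional feature extractor downsamples by a factor of 320,
# so 16000 samples yield 49 frames of hidden_size features.
print(x.shape)
```

The same call pattern works unchanged with the real checkpoint; only `hidden_size` (and hence the last dimension of `x`) differs.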