23f2001106 committed on
Commit fcca64e · 1 Parent(s): bb389e6

Updated README

Files changed (1)
  1. README.md +29 -23
README.md CHANGED
@@ -27,47 +27,53 @@ tags:

## Custom Model Class

- This model uses a custom architecture implemented in `model.py`, specifically the class:
- ```
- BERT_FFNN
- ```
- If you want to load this model locally or fine-tune it further, make sure you have `model.py` in your working directory or import it correctly.
+ This model uses a custom architecture defined inside the `emo_detector/` module:
+
+ - `emo_detector/configuration_bert_ffnn.py` → `BertFFNNConfig`
+ - `emo_detector/modeling_bert_ffnn.py` → `BERT_FFNN`
+
+ To load or fine-tune this model, you must download the full repository (including the `emo_detector/` folder).
+ The recommended way is to use `snapshot_download()` from Hugging Face Hub.

## Installation
```bash
- pip install torch transformers
+ pip install torch transformers huggingface_hub
```

## Usage

```python
- from transformers import AutoTokenizer
+ import sys
import torch
- from model import BERT_FFNN
+ from transformers import AutoTokenizer
+ from huggingface_hub import snapshot_download
+
+ # Download entire repository
+ repo_dir = snapshot_download("NeuralNest05/emo-detector")
+ sys.path.append(repo_dir)
+
+ # Import custom architecture + config
+ from emo_detector.configuration_bert_ffnn import BertFFNNConfig
+ from emo_detector.modeling_bert_ffnn import BERT_FFNN
+
+ DEVICE = "cuda" if torch.cuda.is_available() else "cpu"

# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained("NeuralNest05/emo-detector")

- # Load model
- config = {
-     "bert_model_name": "microsoft/deberta-v3-base",
-     "hidden_dims": [192, 96],
-     "output_dim": 5,
-     "dropout": 0.2,
-     "pooling": "attention",
-     "freeze_bert": False,
-     "freeze_layers": 0,
-     "use_layer_norm": True
- }
- model = BERT_FFNN(**config)
- model_path = hf_hub_download(repo_id="NeuralNest05/emo-detector", filename="pytorch_model.bin")
- model.load_state_dict(torch.load(model_path, map_location=DEVICE))
+ # Load model config and architecture
+ config = BertFFNNConfig.from_pretrained("NeuralNest05/emo-detector")
+ model = BERT_FFNN(config)
+
+ # Load weights
+ model.load_state_dict(torch.load(f"{repo_dir}/pytorch_model.bin", map_location=DEVICE))
model.to(DEVICE)
model.eval()

# Example prediction
texts = ["I am very happy today!", "This is scary..."]
- encodings = tokenizer(texts, truncation=True, padding=True, return_tensors="pt")
+ encodings = tokenizer(texts, truncation=True, padding=True, return_tensors="pt").to(DEVICE)
+
with torch.no_grad():
    logits = model(**encodings)
    probs = torch.sigmoid(logits)
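
The key change in this commit is the loading strategy: the new README puts the `snapshot_download()` result on `sys.path` so the repo's `emo_detector/` package becomes importable. The sketch below demonstrates that same path-append-then-import pattern without any network access, using a throwaway stand-in package built in a temp directory (the dummy module and `CLASS_NAME` constant are illustrative, not part of the real repo):

```python
import sys
import tempfile
from pathlib import Path

# Stand-in for the downloaded snapshot: a root folder containing an
# `emo_detector/`-style Python package (contents here are dummies).
repo_dir = Path(tempfile.mkdtemp())
pkg = repo_dir / "emo_detector"
pkg.mkdir()
(pkg / "__init__.py").write_text("")
(pkg / "modeling_bert_ffnn.py").write_text("CLASS_NAME = 'BERT_FFNN'\n")

# The same pattern the README uses: append the snapshot root to sys.path,
# then import the package modules by their dotted names.
sys.path.append(str(repo_dir))
from emo_detector.modeling_bert_ffnn import CLASS_NAME

print(CLASS_NAME)  # BERT_FFNN
```

With the real repository, `repo_dir` comes from `snapshot_download("NeuralNest05/emo-detector")` instead, and the imports resolve to the actual `BertFFNNConfig` and `BERT_FFNN` classes.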
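
The README's example ends at `probs = torch.sigmoid(logits)`. Using a sigmoid per class (rather than a softmax) suggests a multi-label setup, so a thresholding step is still needed to turn probabilities into predicted emotions. A dependency-free sketch with made-up logits (the 0.5 cutoff, the sample values, and the 5-class shape are assumptions for illustration):

```python
import math

# Hypothetical logits for 2 texts over 5 emotion classes; the real model
# returns a [batch, num_classes] tensor instead of nested lists.
logits = [
    [2.1, -1.3, -0.4, -2.0, -0.5],
    [-1.8, 0.9, -0.2, 2.5, -1.1],
]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Sigmoid gives an independent probability per class; thresholding at 0.5
# yields the set of predicted class indices for each text.
probs = [[sigmoid(v) for v in row] for row in logits]
preds = [[i for i, p in enumerate(row) if p > 0.5] for row in probs]
print(preds)  # [[0], [1, 3]]
```

With the real tensor output, the equivalent one-liner would be `(probs > 0.5).nonzero()` style indexing in PyTorch.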