---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
datasets:
- tarudesu/ViHealthQA
license: mit
---

# nampham1106/bkcare-embedding

This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.


## Usage (Sentence-Transformers)

### Installation <a name="install1"></a>

- Install `sentence-transformers`:
  - `pip install -U sentence-transformers`
- Install `pyvi` for Vietnamese word segmentation:
  - `pip install pyvi`

### Example usage <a name="usage1"></a>

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer
from pyvi.ViTokenizer import tokenize

# Example question-answer pair (Vietnamese): "Can I get the Covid-19 vaccine
# while being vaccinated against hepatitis B?" / "...yes, but the Covid-19
# dose must be at least 14 days before or after a hepatitis B dose."
sentences = ["Đang chích ngừa viêm gan B có chích ngừa Covid-19 được không?", "Nếu anh / chị đang tiêm ngừa vaccine phòng_bệnh viêm_gan B , anh / chị vẫn có_thể tiêm phòng vaccine phòng Covid-19 , tuy_nhiên vaccine Covid-19 phải được tiêm cách trước và sau mũi vaccine viêm gan B tối_thiểu là 14 ngày ."]

model = SentenceTransformer('nampham1106/bkcare-embedding')

# The model expects pyvi word-segmented input
sentences = [tokenize(sentence) for sentence in sentences]
embeddings = model.encode(sentences)
print(embeddings)
```
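The embeddings can then be scored with cosine similarity, for example to rank candidate answers against a question. A minimal sketch continuing from the example above, using `sentence_transformers.util`:

```python
from sentence_transformers import util

# Cosine similarity between the question and answer embeddings computed above;
# a higher score means a closer semantic match.
score = util.cos_sim(embeddings[0], embeddings[1])
print(score)
```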



## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.

```python
from transformers import AutoTokenizer, AutoModel
import torch
from pyvi.ViTokenizer import tokenize

# Mean pooling: take the attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # first element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)


# Sentences we want sentence embeddings for
sentences = ["Đang chích ngừa viêm gan B có chích ngừa Covid-19 được không?", "Nếu anh / chị đang tiêm ngừa vaccine phòng_bệnh viêm_gan B , anh / chị vẫn có_thể tiêm phòng vaccine phòng Covid-19 , tuy_nhiên vaccine Covid-19 phải được tiêm cách trước và sau mũi vaccine viêm gan B tối_thiểu là 14 ngày ."]

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('nampham1106/bkcare-embedding')
model = AutoModel.from_pretrained('nampham1106/bkcare-embedding')

# Word-segment with pyvi before tokenizing for the transformer
sentences = [tokenize(sentence) for sentence in sentences]

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```
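To compare the two sentences without sentence-transformers, L2-normalize the pooled embeddings and take their dot product, which equals cosine similarity. A minimal sketch continuing from the code above:

```python
import torch.nn.functional as F

# Normalize each embedding to unit length; dot product then equals cosine similarity
normalized = F.normalize(sentence_embeddings, p=2, dim=1)
score = normalized[0] @ normalized[1]
print(score)
```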



## Evaluation Results


For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=nampham1106/bkcare-embedding)


## Training
The model was trained with the parameters:

**DataLoader**:

`torch.utils.data.dataloader.DataLoader` of length 307 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```

**Loss**:

`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```

Parameters of the `fit()` method:
```
{
    "epochs": 15,
    "evaluation_steps": 0,
    "evaluator": "NoneType",
    "max_grad_norm": 1,
    "optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
    "optimizer_params": {
        "lr": 2e-05
    },
    "scheduler": "WarmupLinear",
    "steps_per_epoch": null,
    "warmup_steps": 100,
    "weight_decay": 0.01
}
```
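Put together, the parameters above correspond roughly to the following training sketch using the legacy `fit()` API. This is a reconstruction, not the authors' script: the base checkpoint and the construction of (question, answer) pairs from `tarudesu/ViHealthQA` are assumptions.

```python
from sentence_transformers import SentenceTransformer, InputExample, losses
from torch.utils.data import DataLoader

# Assumption: the base checkpoint is not stated in this card; the released
# model is loaded here only as a runnable placeholder.
model = SentenceTransformer('nampham1106/bkcare-embedding')

# Hypothetical (question, answer) pairs, pyvi word-segmented; real training
# would iterate over the full ViHealthQA dataset.
qa_pairs = [
    ("Đang chích ngừa viêm gan B có chích ngừa Covid-19 được không?",
     "Nếu anh / chị đang tiêm ngừa vaccine phòng_bệnh viêm_gan B , anh / chị "
     "vẫn có_thể tiêm phòng vaccine phòng Covid-19 , tuy_nhiên vaccine "
     "Covid-19 phải được tiêm cách trước và sau mũi vaccine viêm gan B "
     "tối_thiểu là 14 ngày ."),
]
train_examples = [InputExample(texts=[q, a]) for q, a in qa_pairs]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)

# In-batch negatives: every other answer in the batch serves as a negative
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=15,
    warmup_steps=100,
    optimizer_params={'lr': 2e-5},
    weight_decay=0.01,
)
```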


## Full Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: RobertaModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
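For reference, the same module stack can be assembled explicitly from sentence-transformers building blocks; a minimal sketch mirroring the printout above (in practice, loading the released checkpoint directly as in the usage sections is equivalent):

```python
from sentence_transformers import SentenceTransformer, models

# RoBERTa encoder truncated at 256 tokens, followed by mean pooling
word_embedding_model = models.Transformer('nampham1106/bkcare-embedding', max_seq_length=256)
pooling_model = models.Pooling(
    word_embedding_model.get_word_embedding_dimension(),  # 768
    pooling_mode_mean_tokens=True,
)
model = SentenceTransformer(modules=[word_embedding_model, pooling_model])
print(model)
```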

## Citing & Authors
