256 Dimension updated

Files changed:
- 2_Dense/model.safetensors +1 -1
- README.md +44 -44
- model.safetensors +1 -1
2_Dense/model.safetensors
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:8eadfa9595c8f175d2a5113f17d40d956f408b29cd32aa5e6523dc473034ec2f
 size 1049760
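The 2_Dense weight file is small enough to sanity-check against the commit title. Assuming (this diff does not confirm it) that the module is a float32 linear layer projecting bge-large's 1024-dim embeddings down to 256 dims, the tensor bytes almost exactly account for the 1,049,760-byte file:

```python
# Hypothetical sanity check: could 2_Dense/model.safetensors hold a
# float32 1024 -> 256 projection, matching the "256 Dimension" commit title?
in_dim, out_dim, float_bytes = 1024, 256, 4     # assumed shape, not read from the file
weight_bytes = in_dim * out_dim * float_bytes   # linear weight matrix
bias_bytes = out_dim * float_bytes              # bias vector
tensor_bytes = weight_bytes + bias_bytes
print(tensor_bytes)                # 1049600
print(1049760 - tensor_bytes)      # 160 bytes left over for the safetensors header
```

The leftover 160 bytes are consistent with a small JSON header, which is how the safetensors format stores tensor metadata before the raw data.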
README.md
CHANGED
@@ -4,35 +4,35 @@ tags:
 - sentence-similarity
 - feature-extraction
 - generated_from_trainer
-- dataset_size:
+- dataset_size:69216
 - loss:MultipleNegativesRankingLoss
 base_model: BAAI/bge-large-en-v1.5
 widget:
-- source_sentence:
+- source_sentence: ajith s/o sockalingam
   sentences:
-  -
-  -
-  -
-- source_sentence:
+  - ajith a/l sockalingam
+  - marcus ping yi ng
+  - ajith a/p sockalingam
+- source_sentence: quinn kwan xin fang
   sentences:
-  -
-  -
-  -
-- source_sentence:
+  - ambiga a/p jacob
+  - quinn fang kwan xin
+  - xin kwan fang
+- source_sentence: brandon teh min ling
   sentences:
-  -
-  -
-  -
-- source_sentence:
+  - victor bing yong ng
+  - min ling teh brandon
+  - ling min teh brandon
+- source_sentence: carmen ho xin jun
   sentences:
-  -
-  -
-  -
-- source_sentence:
+  - xin ho jun carmen
+  - pei ho yi grace
+  - xin jun ho carmen
+- source_sentence: alicia lim siu ling
   sentences:
-  -
-  -
-  -
+  - lim ling siu alicia
+  - alicia siu ling lim
+  - nadia soh meng jun
 pipeline_tag: sentence-similarity
 library_name: sentence-transformers
 ---

@@ -87,9 +87,9 @@ from sentence_transformers import SentenceTransformer
 model = SentenceTransformer("foochun/bge-large-finetuned")
 # Run inference
 sentences = [
-    '
-    '
-    '
+    'alicia lim siu ling',
+    'alicia siu ling lim',
+    'lim ling siu alicia',
 ]
 embeddings = model.encode(sentences)
 print(embeddings.shape)

@@ -143,19 +143,19 @@ You can finetune this model on your own dataset.
 
 #### Unnamed Dataset
 
-* Size: 69,
+* Size: 69,216 training samples
 * Columns: <code>query</code>, <code>pos</code>, and <code>neg</code>
 * Approximate statistics based on the first 1000 samples:
 | | query | pos | neg |
 |:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
 | type | string | string | string |
-| details | <ul><li>min: 4 tokens</li><li>mean: 8.
+| details | <ul><li>min: 4 tokens</li><li>mean: 8.96 tokens</li><li>max: 17 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 8.22 tokens</li><li>max: 17 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 8.47 tokens</li><li>max: 16 tokens</li></ul> |
 * Samples:
-| query | pos | neg
-|:-----------------------------------|:-------------------------------|:------------------------------
-| <code>
-| <code>
-| <code>
+| query | pos | neg |
+|:-----------------------------------|:-------------------------------|:------------------------------|
+| <code>abdul karim bin bakar</code> | <code>abdul karim bakar</code> | <code>johan bin hamid</code> |
+| <code>rupai anak jamit</code> | <code>rupai jamit</code> | <code>rupai anak karim</code> |
+| <code>sim kim ning</code> | <code>ning sim kim</code> | <code>kim sim ning</code> |
 * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
 ```json
 {

@@ -168,19 +168,19 @@ You can finetune this model on your own dataset.
 
 #### Unnamed Dataset
 
-* Size: 9,
+* Size: 9,887 evaluation samples
 * Columns: <code>query</code>, <code>pos</code>, and <code>neg</code>
 * Approximate statistics based on the first 1000 samples:
 | | query | pos | neg |
 |:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
 | type | string | string | string |
-| details | <ul><li>min: 4 tokens</li><li>mean: 7.
+| details | <ul><li>min: 4 tokens</li><li>mean: 7.86 tokens</li><li>max: 17 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 7.38 tokens</li><li>max: 16 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 7.65 tokens</li><li>max: 16 tokens</li></ul> |
 * Samples:
-| query
-|:---------------------------------|:-----------------------------|:-----------------------------------|
-| <code>
-| <code>
-| <code>
+| query | pos | neg |
+|:------------------------------------|:---------------------------------------|:------------------------------------|
+| <code>mohd ridzuan bin nasir</code> | <code>mohamad ridzuan bin nasir</code> | <code>mohd ridzuan bin naser</code> |
+| <code>isabel koh jun liang</code> | <code>isabel koh jun liang</code> | <code>liang jun koh isabel</code> |
+| <code>neo mei chuan</code> | <code>neo mei chuan</code> | <code>mak mei chuan</code> |
 * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
 ```json
 {

@@ -325,12 +325,12 @@ You can finetune this model on your own dataset.
 ### Training Logs
 | Epoch | Step | Training Loss | Validation Loss |
 |:----------:|:--------:|:-------------:|:---------------:|
-| 0.
-| 0.
-| 1.
-| 1.
-| 2.
-| **2.
+| 0.4621 | 500 | 0.1357 | 0.0127 |
+| 0.9242 | 1000 | 0.0149 | 0.0065 |
+| 1.3863 | 1500 | 0.0079 | 0.0065 |
+| 1.8484 | 2000 | 0.0069 | 0.0043 |
+| 2.3105 | 2500 | 0.0059 | 0.0040 |
+| **2.7726** | **3000** | **0.0052** | **0.0039** |
 
 * The bold row denotes the saved checkpoint.
 
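The README trains with MultipleNegativesRankingLoss, where every other positive in the batch serves as a negative for a given query. A minimal numpy sketch of that objective (a toy illustration, not sentence-transformers' actual implementation; `scale=20.0` mirrors the library's documented default):

```python
import numpy as np

def mnr_loss(query_emb: np.ndarray, pos_emb: np.ndarray, scale: float = 20.0) -> float:
    """Toy in-batch MultipleNegativesRankingLoss.

    For row i, pos_emb[i] is the positive and every other pos_emb[j]
    acts as a negative: cross-entropy over the scaled cosine-similarity
    matrix, with the diagonal entries as the target class.
    """
    q = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
    p = pos_emb / np.linalg.norm(pos_emb, axis=1, keepdims=True)
    scores = scale * (q @ p.T)                    # (batch, batch) cosine scores
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    log_softmax = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_softmax)))
```

With perfectly matched pairs the diagonal dominates and the loss is near zero; shuffling the positives against the queries drives it up, which is the gradient signal that pulls name variants like `alicia lim siu ling` / `alicia siu ling lim` together.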
model.safetensors
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:15b52f7abf658111d9430675ac14595f44e24a6d62b078f77ee10351c0ce222f
 size 1340612432
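Both safetensors diffs above touch only Git LFS pointer files: the repository stores a three-line stub (spec version, SHA-256 "oid" of the contents, byte size) while the actual weights live in LFS storage. A minimal sketch of how such a pointer is derived (`lfs_pointer` is a hypothetical helper, not part of the git-lfs CLI):

```python
import hashlib

def lfs_pointer(path: str) -> str:
    """Build the three-line Git LFS pointer text for a file's contents."""
    with open(path, "rb") as f:
        data = f.read()
    oid = hashlib.sha256(data).hexdigest()  # the "oid sha256:..." line
    return (
        "version https://git-lfs.github.com/spec/v1\n"
        f"oid sha256:{oid}\n"
        f"size {len(data)}\n"
    )
```

This is why each weight update shows up here as a one-line oid change: the pointer's hash changes whenever the underlying model bytes do, even though the file size can stay identical.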