Add new SparseEncoder model
- README.md: +89 -99
- config.json: +4 -4
- model.safetensors: +2 -2
README.md
CHANGED
- sparse
- splade
- generated_from_trainer
- dataset_size:250000
- loss:SpladeLoss
- loss:SparseMarginMSELoss
- loss:FlopsLoss
base_model: prajjwal1/bert-mini
widget:
- text: what did marlo thomas play on
- text: 'Unused vacation does not roll over. to next calendar year and is not paid
    out at termination. Please Note: This table applies to employees in positions
    with 100% FTE. The number of hours/days of vacation are pro-rated for FTEs between
    75%. and 99%. For Example: If a non-exempt employee who is within their first
    five years of service has an FTE of 75%, then the number of hours they would.
    accrue each month would be 6 (8 x .75 = 6) and not 8.'
- text: 'To convert from miles to feet by hand, multiply miles by 5280. miles * 5280
    = feet. To convert from feet to miles by hand, divide feet by 5280. feet / 5280
    = miles. An automated version of this calculator can be found here:'
- text: All you have to do is click on the button directly below and follow the instructions.
    Click he relevant payment button below for £55 payment. --------------------------.
    Star Attuned Crystals for Activation. Another way to receive star energies and
    activations is to buy the attuned crystals that I provide.These carry the energies
    of specific stars or star beings and guides. By holding these you can feel the
    energies of the star or star being and this brings healing, activation, spiritual
    growth and sometimes communication.he highest form of star energy work / activation
    is to receive a Star Attunement. Star Attunements carry the power of stars and
    evolved star beings. They are off the scale and profoundly beautiful and spiritual.
- text: Fermentation is a metabolic pathway that produce ATP molecules under anaerobic
    conditions (only undergoes glycolysis), NAD+ is used directly in glycolysis to
    form ATP molecules, which is not as efficient as cellular respiration because
    only 2ATP molecules are formed during the glycolysis.
pipeline_tag: feature-extraction
library_name: sentence-transformers
metrics:
- corpus_active_dims
- corpus_sparsity_ratio
model-index:
- name: SPLADE-BERT-Mini-Distil
  results:
  - task:
      type: sparse-information-retrieval

      type: unknown
    metrics:
    - type: dot_accuracy@1
      value: 0.4828
      name: Dot Accuracy@1
    - type: dot_accuracy@3
      value: 0.8052
      name: Dot Accuracy@3
    - type: dot_accuracy@5
      value: 0.9046
      name: Dot Accuracy@5
    - type: dot_accuracy@10
      value: 0.9666
      name: Dot Accuracy@10
    - type: dot_precision@1
      value: 0.4828
      name: Dot Precision@1
    - type: dot_precision@3
      value: 0.27566666666666667
      name: Dot Precision@3
    - type: dot_precision@5
      value: 0.18787999999999996
      name: Dot Precision@5
    - type: dot_precision@10
      value: 0.10156
      name: Dot Precision@10
    - type: dot_recall@1
      value: 0.4673
      name: Dot Recall@1
    - type: dot_recall@3
      value: 0.792
      name: Dot Recall@3
    - type: dot_recall@5
      value: 0.8949
      name: Dot Recall@5
    - type: dot_recall@10
      value: 0.9624166666666668
      name: Dot Recall@10
    - type: dot_ndcg@10
      value: 0.7302009825334612
      name: Dot Ndcg@10
    - type: dot_mrr@10
      value: 0.6579904761904781
      name: Dot Mrr@10
    - type: dot_map@100
      value: 0.6534502206938125
      name: Dot Map@100
    - type: query_active_dims
      value: 19.52400016784668
      name: Query Active Dims
    - type: query_sparsity_ratio
      value: 0.9993603302480883
      name: Query Sparsity Ratio
    - type: corpus_active_dims
      value: 113.4705113862854
      name: Corpus Active Dims
    - type: corpus_sparsity_ratio
      value: 0.9962823369573983
      name: Corpus Sparsity Ratio
---
# SPLADE-BERT-Mini-Distil

This is a [SPLADE Sparse Encoder](https://www.sbert.net/docs/sparse_encoder/usage/usage.html) model finetuned from [prajjwal1/bert-mini](https://huggingface.co/prajjwal1/bert-mini) using the [sentence-transformers](https://www.SBERT.net) library. It maps sentences & paragraphs to a 30522-dimensional sparse vector space and can be used for semantic search and sparse retrieval.

## Model Details

### Model Description
- **Model Type:** SPLADE Sparse Encoder
- **Base model:** [prajjwal1/bert-mini](https://huggingface.co/prajjwal1/bert-mini) <!-- at revision 5e123abc2480f0c4b4cac186d3b3f09299c258fc -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 30522 dimensions
- **Similarity Function:** Dot Product
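The sparsity metrics reported in this card follow directly from the output dimensionality: the sparsity ratio is 1 minus the average number of active dimensions divided by the 30522-entry vocabulary. A minimal arithmetic check in plain Python, using the averages from the evaluation below:

```python
# Sparsity ratio = 1 - active_dims / output_dimensionality.
VOCAB_SIZE = 30522  # output dimensionality of the SPLADE vectors

def sparsity_ratio(active_dims: float, vocab_size: int = VOCAB_SIZE) -> float:
    """Average fraction of dimensions that are zero in an embedding."""
    return 1.0 - active_dims / vocab_size

# Averages reported for this model:
print(round(sparsity_ratio(19.524), 4))    # query side  -> 0.9994
print(round(sparsity_ratio(113.4705), 4))  # corpus side -> 0.9963
```

Queries activate far fewer dimensions than documents, which keeps query-time inverted-index lookups cheap.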
Then you can load this model and run inference.

```python
from sentence_transformers import SparseEncoder

# Download from the 🤗 Hub
model = SparseEncoder("yosefw/SPLADE-BERT-Mini-distil-v2")
# Run inference
queries = [
    "definition of fermentation in the lab",
]
documents = [
    'Fermentation is a metabolic pathway that produce ATP molecules under anaerobic conditions (only undergoes glycolysis), NAD+ is used directly in glycolysis to form ATP molecules, which is not as efficient as cellular respiration because only 2ATP molecules are formed during the glycolysis.',
    'Essay on Yeast Fermentation ... Yeast Fermentation Lab Report The purpose of this experiment was to observe the process in which cells must partake in a respiration process called anaerobic fermentation and as the name suggests, oxygen is not required.',
    '\ufeffYeast Fermentation Lab Report The purpose of this experiment was to observe the process in which cells must partake in a respiration process called anaerobic fermentation and as the name suggests, oxygen is not required.',
]
query_embeddings = model.encode_query(queries)
document_embeddings = model.encode_document(documents)
print(query_embeddings.shape, document_embeddings.shape)

# Get the similarity scores for the embeddings
similarities = model.similarity(query_embeddings, document_embeddings)
print(similarities)
# tensor([[20.0220, 17.1372, 15.9159]])
```
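Because the similarity function is a plain dot product, the scoring step can be reproduced without the library. A toy sketch with made-up 4-dimensional vectors (the real embeddings are 30522-dimensional): only dimensions active in both the query and a document contribute to that document's score.

```python
# Toy sparse embeddings as dense lists; zeros are inactive dimensions.
query = [0.0, 2.0, 0.0, 1.5]
docs = [
    [0.0, 3.0, 0.0, 4.0],
    [1.0, 2.5, 0.0, 2.0],
    [0.0, 1.0, 2.0, 2.0],
]

def dot(u, v):
    # Terms where either side is 0 vanish, so only dimensions shared by
    # query and document (here: indices 1 and 3) affect the score.
    return sum(a * b for a, b in zip(u, v))

scores = [dot(query, d) for d in docs]
print(scores)  # [12.0, 8.0, 5.0]
```

This is why SPLADE embeddings can be served from a standard inverted index: each non-zero dimension corresponds to a vocabulary term.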
<!--

| Metric                | Value      |
|:----------------------|:-----------|
| dot_accuracy@1        | 0.4828     |
| dot_accuracy@3        | 0.8052     |
| dot_accuracy@5        | 0.9046     |
| dot_accuracy@10       | 0.9666     |
| dot_precision@1       | 0.4828     |
| dot_precision@3       | 0.2757     |
| dot_precision@5       | 0.1879     |
| dot_precision@10      | 0.1016     |
| dot_recall@1          | 0.4673     |
| dot_recall@3          | 0.792      |
| dot_recall@5          | 0.8949     |
| dot_recall@10         | 0.9624     |
| **dot_ndcg@10**       | **0.7302** |
| dot_mrr@10            | 0.658      |
| dot_map@100           | 0.6535     |
| query_active_dims     | 19.524     |
| query_sparsity_ratio  | 0.9994     |
| corpus_active_dims    | 113.4705   |
| corpus_sparsity_ratio | 0.9963     |

<!--
## Bias, Risks and Limitations

#### Unnamed Dataset

* Size: 250,000 training samples
* Columns: <code>query</code>, <code>positive</code>, <code>negative_1</code>, <code>negative_2</code>, <code>negative_3</code>, <code>negative_4</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
  |         | query | positive | negative_1 | negative_2 | negative_3 | negative_4 | label |
  |:--------|:------|:---------|:-----------|:-----------|:-----------|:-----------|:------|
  | type    | string | string | string | string | string | string | list |
  | details | <ul><li>min: 4 tokens</li><li>mean: 8.87 tokens</li><li>max: 43 tokens</li></ul> | <ul><li>min: 24 tokens</li><li>mean: 81.23 tokens</li><li>max: 259 tokens</li></ul> | <ul><li>min: 20 tokens</li><li>mean: 79.21 tokens</li><li>max: 197 tokens</li></ul> | <ul><li>min: 20 tokens</li><li>mean: 77.89 tokens</li><li>max: 207 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 76.38 tokens</li><li>max: 271 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 75.46 tokens</li><li>max: 214 tokens</li></ul> | <ul><li>size: 4 elements</li></ul> |
* Samples:
  | query | positive | negative_1 | negative_2 | negative_3 | negative_4 | label |
  |:------|:---------|:-----------|:-----------|:-----------|:-----------|:------|
  | <code>heart specialists in ridgeland ms</code> | <code>Dr. George Reynolds Jr, MD is a cardiology specialist in Ridgeland, MS and has been practicing for 35 years. He graduated from Vanderbilt University School Of Medicine in 1977 and specializes in cardiology and internal medicine.</code> | <code>Dr. James Kramer is a Internist in Ridgeland, MS. Find Dr. Kramer's phone number, address and more.</code> | <code>Dr. James Kramer is an internist in Ridgeland, Mississippi. He received his medical degree from Loma Linda University School of Medicine and has been in practice for more than 20 years. Dr. James Kramer's Details</code> | <code>Chronic Pulmonary Heart Diseases (incl. Pulmonary Hypertension) Coarctation of the Aorta; Congenital Aortic Valve Disorders; Congenital Heart Defects; Congenital Heart Disease; Congestive Heart Failure; Coronary Artery Disease (CAD) Endocarditis; Heart Attack (Acute Myocardial Infarction) Heart Disease; Heart Murmur; Heart Palpitations; Hyperlipidemia; Hypertension</code> | <code>A growing shortage of primary care doctors means you might have to look harder for ongoing care. How to Read an OTC Medication Label Purvi Parikh, M.D. \| Feb. 12, 2018</code> | <code>[6.058592796325684, 6.587987422943115, 19.88274383544922, 20.211898803710938]</code> |
  | <code>does baytril otic require a prescription</code> | <code>Baytril Otic Ear Drops-Enrofloxacin/Silver Sulfadiazine-Prices & Information. A prescription is required for this item. A prescription is required for this item. Brand medication is not available at this time.</code> | <code>RX required for this item. Click here for our full Prescription Policy and Form. Baytril Otic (enrofloxacin/silver sulfadiazine) Emulsion from Bayer is the first fluoroquinolone approved by the Food and Drug Administration for the topical treatment of canine otitis externa.</code> | <code>Product Details. Baytril Otic is a highly effective treatment prescribed by many veterinarians when your pet has an ear infection caused by susceptible bacteria or fungus. Baytril Otic is: a liquid emulsion that is used topically directly in the ear or on the skin in order to treat susceptible bacterial and yeast infections.</code> | <code>Baytril for dogs is an antibiotic often prescribed for bacterial infections, particularly those involving the ears. Ear infections are rare in many animals, but quite common in dogs. This is particularly true for dogs with long droopy ears, where it will stay very warm and moist.</code> | <code>Administer 5-10 Baytril ear drops per treatment in dogs 35 lbs or less and 10-15 drops per treatment in dogs more than 35 lbs.</code> | <code>[1.0, 3.640146493911743, 6.450072288513184, 11.96937084197998]</code> |
  | <code>what is on a gyro</code> | <code>Report Abuse. Gyros or gyro (giros) (pronounced /ˈjɪəroʊ/ or /ˈdʒaɪroʊ/, Greek: γύρος turn) is a Greek dish consisting of meat (typically lamb and/or beef), tomato, onion, and tzatziki sauce, and is served with pita bread. Chicken and pork meat can be used too.</code> | <code>A gyroscope (from Ancient Greek γῦρος gûros, circle and σκοπέω skopéō, to look) is a spinning wheel or disc in which the axis of rotation is free to assume any orientation by itself. When rotating, the orientation of this axis is unaffected by tilting or rotation of the mounting, according to the conservation of angular momentum.</code> | <code>Diagram of a gyro wheel. Reaction arrows about the output axis (blue) correspond to forces applied about the input axis (green), and vice versa. A gyroscope is a wheel mounted in two or three gimbals, which are a pivoted supports that allow the rotation of the wheel about a single axis.</code> | <code>A fair number of our users are unsure of how to pronounce gyro. This isn't surprising, since there are two different gyros and they have two different pronunciations. The earlier gyro is the one that is a shortened form of gyrocompass or gyroscope, and it has a pronunciation that conforms to one's expectations: /JEYE-roh/.</code> | <code>Vibration Gyro Sensors. Vibration gyro sensors sense angular velocity from the Coriolis force applied to a vibrating element. For this reason, the accuracy with which angular velocity is measured differs significantly depending on element material and structural differences.</code> | <code>[2.1750364303588867, 2.634796142578125, 4.30520486831665, 6.382436752319336]</code> |
* Loss: [<code>SpladeLoss</code>](https://sbert.net/docs/package_reference/sparse_encoder/losses.html#spladeloss) with these parameters:
  ```json
  {
### Training Logs
| Epoch   | Step      | Training Loss | dot_ndcg@10 |
|:-------:|:---------:|:-------------:|:-----------:|
| 1.0     | 5209      | 30541.8683    | 0.6969      |
| 2.0     | 10418     | 13.3966       | 0.7167      |
| 3.0     | 15627     | 11.6531       | 0.7262      |
| 4.0     | 20836     | 9.9781        | 0.7280      |
| 5.0     | 26045     | 8.881         | 0.7289      |
| **6.0** | **31254** | **8.3454**    | **0.7302**  |

* The bold row denotes the saved checkpoint.
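The step counts in the log are consistent with the dataset size: with 250,000 training samples, 5209 steps per epoch implies an effective batch size of 48. Note the batch size is an inference from these numbers, not something stated in the card:

```python
import math

DATASET_SIZE = 250_000  # training samples, from the dataset section
BATCH_SIZE = 48         # assumed: the value implied by the step counts

steps_per_epoch = math.ceil(DATASET_SIZE / BATCH_SIZE)
print(steps_per_epoch)      # 5209, the per-epoch step increment in the log
print(steps_per_epoch * 6)  # 31254, the saved checkpoint's step at epoch 6
```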
config.json
CHANGED

"classifier_dropout": null,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 256,
"initializer_range": 0.02,
"intermediate_size": 1024,
"layer_norm_eps": 1e-12,
"max_position_embeddings": 512,
"model_type": "bert",
"num_attention_heads": 4,
"num_hidden_layers": 4,
"pad_token_id": 0,
"position_embedding_type": "absolute",
"torch_dtype": "float32",
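The new checkpoint size lines up with this float32 config: dividing the safetensors byte count by 4 bytes per parameter gives roughly 11.2M parameters, the expected scale for a 4-layer, 256-hidden BERT-mini plus the MLM head SPLADE uses (the small safetensors JSON header is ignored here, so the figure is approximate):

```python
SAFETENSORS_SIZE = 44_814_856  # bytes, from the model.safetensors entry
BYTES_PER_PARAM = 4            # "torch_dtype": "float32"

approx_params = SAFETENSORS_SIZE / BYTES_PER_PARAM
print(f"~{approx_params / 1e6:.1f}M parameters")  # ~11.2M
```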
model.safetensors
CHANGED

version https://git-lfs.github.com/spec/v1
oid sha256:eca5ef6ed2e950b988214239e64f87b79f40b939a74ad47b6994ffa4b5de2c25
size 44814856
|