omarelsayeed committed on
Commit 7ffdf59 · 1 Parent(s): 3887bae

Upload folder using huggingface_hub

Files changed (4)
  1. README.md +4 -4
  2. pytorch_model.bin +1 -1
  3. sentence_bert_config.json +1 -1
  4. tokenizer.json +2 -2
README.md CHANGED
@@ -85,9 +85,9 @@ The model was trained with the parameters:
 
 **DataLoader**:
 
-`torch.utils.data.dataloader.DataLoader` of length 17 with parameters:
+`torch.utils.data.dataloader.DataLoader` of length 28 with parameters:
 ```
-{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
+{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
 ```
 
 **Loss**:
@@ -97,7 +97,7 @@ The model was trained with the parameters:
 Parameters of the fit()-Method:
 ```
 {
-    "epochs": 3,
+    "epochs": 15,
     "evaluation_steps": 0,
     "evaluator": "NoneType",
     "max_grad_norm": 1,
@@ -116,7 +116,7 @@ Parameters of the fit()-Method:
 ## Full Model Architecture
 ```
 SentenceTransformer(
-  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
+  (0): Transformer({'max_seq_length': 80, 'do_lower_case': False}) with Transformer model: BertModel
   (1): Pooling({'word_embedding_dimension': 256, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
 )
 ```
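For context, the updated card values above correspond to a training setup along these lines. This is a hedged sketch that only reconstructs the parameters visible in the diff (batch size 64, 15 epochs, evaluator `NoneType`, `max_grad_norm` 1); any other training details of this commit are not shown here.

```python
# Sketch only: the training parameters shown in the updated model card.
# Values come from this diff; nothing else about the run is assumed.
fit_kwargs = {
    "epochs": 15,            # was 3 before this commit
    "evaluation_steps": 0,
    "evaluator": None,       # the card prints this as "NoneType"
    "max_grad_norm": 1,
}
dataloader_params = {"batch_size": 64}  # was 32 before this commit

print(fit_kwargs["epochs"], dataloader_params["batch_size"])
```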
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:eff90c1540455aab4cc086aaa4e00221a4aeeefe73110befecc2b09a5c0e21f7
+oid sha256:461e966badc0fe32ad09b76802f18f42c3181d19a33ffb70c8abd8a6f43bdf3a
 size 46223689
sentence_bert_config.json CHANGED
@@ -1,4 +1,4 @@
 {
-    "max_seq_length": 512,
+    "max_seq_length": 80,
     "do_lower_case": false
 }
tokenizer.json CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:8f189edd65d6661271d6f2e83efcd1f9bfec6243691a519003e27268c35a9174
-size 837545
+oid sha256:cf64ea4b6f2156ee6ed9faf8cf10d83b17af75673821f7f418b619c046f2e25e
+size 837544
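The key functional change in this commit is `max_seq_length` dropping from 512 to 80: inputs longer than 80 tokens are truncated before encoding. A minimal sketch of that setting, using the before/after contents of `sentence_bert_config.json` copied inline from this diff:

```python
import json

# Before/after contents of sentence_bert_config.json, copied from this diff.
old_config = json.loads('{"max_seq_length": 512, "do_lower_case": false}')
new_config = json.loads('{"max_seq_length": 80, "do_lower_case": false}')

# Sentences are truncated to max_seq_length tokens before encoding, so the
# new limit keeps only the first 80 tokens of each input.
print(old_config["max_seq_length"], "->", new_config["max_seq_length"])  # 512 -> 80
```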