veton-berisha committed on
Commit 2cedd16 · verified · 1 Parent(s): b105b5b

mse=0.0240

README.md CHANGED
@@ -8,31 +8,36 @@ tags:
 - loss:CosineSimilarityLoss
 base_model: sentence-transformers/all-mpnet-base-v2
 widget:
-- source_sentence: Build type system for programming language from scratch
+- source_sentence: Frontend performance optimization including lazy loading, code
+    splitting, caching
   sentences:
-  - Uses TypeScript for type-safe JavaScript
-  - Led architecture decision meetings resulting in consensus
-  - Integrated Stripe, PayPal, and custom payment solutions
-- source_sentence: Privacy engineering skills
+  - Optimized React applications with code splitting reducing initial load by 60%
+  - Implemented mutation testing successfully
+  - Designed event-driven architecture using RabbitMQ with dead letter queues
+- source_sentence: Git version control proficiency with branching strategies and pull
+    request workflows
   sentences:
-  - Implemented differential privacy
-  - Technical implementation without vendor management
-  - Created developer-friendly APIs with Swagger docs
-- source_sentence: Privacy Pass, privacy protocol
+  - Daily Git user, implemented GitFlow branching model, reviewed hundreds of pull
+    requests
+  - PWA manifest configuration expertise
+  - Used ExecutorService and CompletableFuture effectively
+- source_sentence: Self-motivation to stay current with industry trends and emerging
+    technologies
   sentences:
-  - Modern development tools only
-  - Excellent at breaking down complex topics for junior developers
-  - Privacy-preserving authentication methods
-- source_sentence: JVM tuning and profiling
+  - Java developer using CompletableFuture and streams for concurrent programming
+  - Pinecone, Weaviate vector databases
+  - Completed 5 online certifications last year and contributed to open-source projects
+- source_sentence: Conflict resolution skills in technical discussions and architecture
+    decisions
   sentences:
-  - Performance monitoring patterns
-  - Optimized GC settings reducing pause times
-  - Senior developer with proven track record debugging distributed system race conditions
-- source_sentence: Knowledge sharing enthusiasm
+  - Comprehensive API testing with Postman/Newman
+  - Facilitates productive technical debates leading to consensus on design choices
+  - Monitored service health with alerts
+- source_sentence: Origin Rules, backend config
   sentences:
-  - Regular meetup speaker and blogger
-  - Optimized Spark jobs processing terabytes of data daily
-  - Configured database partitioning
+  - Origin server configuration patterns
+  - Functional programmer using F# for financial domain modeling
+  - Content writer with blog experience
 pipeline_tag: sentence-similarity
 library_name: sentence-transformers
 metrics:
@@ -49,10 +54,10 @@ model-index:
       type: val
     metrics:
     - type: pearson_cosine
-      value: 0.8977247913414342
+      value: 0.8944877836968456
       name: Pearson Cosine
     - type: spearman_cosine
-      value: 0.8052388814564073
+      value: 0.8039152046120273
       name: Spearman Cosine
 ---
@@ -105,9 +110,9 @@ from sentence_transformers import SentenceTransformer
 model = SentenceTransformer("sentence_transformers_model_id")
 # Run inference
 sentences = [
-    'Knowledge sharing enthusiasm',
-    'Regular meetup speaker and blogger',
-    'Configured database partitioning',
+    'Origin Rules, backend config',
+    'Origin server configuration patterns',
+    'Content writer with blog experience',
 ]
 embeddings = model.encode(sentences)
 print(embeddings.shape)
@@ -154,8 +159,8 @@ You can finetune this model on your own dataset.
 
 | Metric              | Value      |
 |:--------------------|:-----------|
-| pearson_cosine      | 0.8977     |
-| **spearman_cosine** | **0.8052** |
+| pearson_cosine      | 0.8945     |
+| **spearman_cosine** | **0.8039** |
 
 <!--
 ## Bias, Risks and Limitations
@@ -181,13 +186,13 @@ You can finetune this model on your own dataset.
 |         | sentence_0 | sentence_1 | label |
 |:--------|:-----------|:-----------|:------|
 | type    | string     | string     | float |
-| details | <ul><li>min: 4 tokens</li><li>mean: 9.74 tokens</li><li>max: 27 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 11.06 tokens</li><li>max: 24 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.67</li><li>max: 1.0</li></ul> |
+| details | <ul><li>min: 4 tokens</li><li>mean: 9.86 tokens</li><li>max: 23 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 10.97 tokens</li><li>max: 24 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.64</li><li>max: 1.0</li></ul> |
 * Samples:
-  | sentence_0 | sentence_1 | label |
-  |:---------------------------------------------------------------------------|:---------------------------------------------------------------------------|:-----------------|
-  | <code>Boundary-value testing and equivalence partitioning expertise</code> | <code>QA engineer designing test cases with boundary value analysis</code> | <code>0.9</code> |
-  | <code>Must have strong decision-making skills</code> | <code>Makes timely decisions based on available information</code> | <code>0.7</code> |
-  | <code>8+ years building real-time collaboration tools</code> | <code>Traditional request-response application development</code> | <code>0.2</code> |
+  | sentence_0 | sentence_1 | label |
+  |:--------------------------------------------------------------|:------------------------------------------------------------------|:-----------------|
+  | <code>Performance testing tools</code> | <code>Consistent Lighthouse score improvements</code> | <code>0.9</code> |
+  | <code>Responsibility never shirked</code> | <code>Never irresponsible, always accountable, duty keeper</code> | <code>0.9</code> |
+  | <code>Experience with distributed consensus algorithms</code> | <code>Academic researcher in distributed systems theory</code> | <code>0.4</code> |
 * Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
   ```json
   {
@@ -326,17 +331,17 @@ You can finetune this model on your own dataset.
 ### Training Logs
 | Epoch  | Step | val_spearman_cosine |
 |:------:|:----:|:-------------------:|
-| 0.5208 | 50   | 0.6737              |
-| 1.0    | 96   | 0.7384              |
-| 1.0417 | 100  | 0.7431              |
-| 1.5625 | 150  | 0.7703              |
-| 2.0    | 192  | 0.7790              |
-| 2.0833 | 200  | 0.7817              |
-| 2.6042 | 250  | 0.8011              |
-| 3.0    | 288  | 0.7967              |
-| 3.125  | 300  | 0.7963              |
-| 3.6458 | 350  | 0.8046              |
-| 4.0    | 384  | 0.8052              |
+| 0.5208 | 50   | 0.6705              |
+| 1.0    | 96   | 0.7258              |
+| 1.0417 | 100  | 0.7347              |
+| 1.5625 | 150  | 0.7621              |
+| 2.0    | 192  | 0.7815              |
+| 2.0833 | 200  | 0.7823              |
+| 2.6042 | 250  | 0.7885              |
+| 3.0    | 288  | 0.8023              |
+| 3.125  | 300  | 0.8012              |
+| 3.6458 | 350  | 0.8035              |
+| 4.0    | 384  | 0.8039              |
 
 
 ### Framework Versions
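The commit message reports mse=0.0240. With `CosineSimilarityLoss`, that figure is the mean squared error between the cosine similarity of each embedding pair and its gold label. A minimal pure-Python sketch of the quantity being minimized (the 3-d vectors and labels below are toy values, not the model's actual 768-dimensional embeddings):

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def cosine_similarity_mse(pairs, labels):
    """Mean squared error between pairwise cosine similarities and gold labels."""
    preds = [cosine_similarity(u, v) for u, v in pairs]
    return sum((p - l) ** 2 for p, l in zip(preds, labels)) / len(labels)

# Toy "embeddings" standing in for real model outputs.
pairs = [
    ([1.0, 0.0, 0.0], [1.0, 0.0, 0.0]),  # identical vectors -> cosine 1.0
    ([1.0, 0.0, 0.0], [0.0, 1.0, 0.0]),  # orthogonal vectors -> cosine 0.0
]
labels = [0.9, 0.2]
print(round(cosine_similarity_mse(pairs, labels), 4))  # -> 0.025
```

During training the same error is computed over batches of real embedding pairs and backpropagated through the transformer.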
eval/similarity_evaluation_val_results.csv CHANGED
@@ -1,5 +1,5 @@
 epoch,steps,cosine_pearson,cosine_spearman
-1.0,96,0.8272483418053012,0.7384040919120075
-2.0,192,0.8806144722889805,0.7789630856263889
-3.0,288,0.8940053252264049,0.7967165513263559
-4.0,384,0.8977247913414342,0.8052388814564073
+1.0,96,0.8246132386236589,0.7258432278825692
+2.0,192,0.8747130077761142,0.7814918161144916
+3.0,288,0.8917155059609527,0.8023055594486815
+4.0,384,0.8944877836968456,0.8039152046120273
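The CSV tracks two metrics per epoch: `cosine_pearson`, the linear correlation between predicted cosine similarities and gold labels, and `cosine_spearman`, the Pearson correlation of their ranks. A minimal sketch of both (the score lists are hypothetical, not this run's predictions; the rank helper assumes distinct values, so it skips tie handling):

```python
def pearson(x, y):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def ranks(x):
    """Rank positions (1-based); no tie handling, fine for distinct values."""
    order = sorted(range(len(x)), key=lambda i: x[i])
    r = [0] * len(x)
    for rank, i in enumerate(order):
        r[i] = rank + 1
    return r

def spearman(x, y):
    """Spearman correlation = Pearson correlation of the ranks."""
    return pearson(ranks(x), ranks(y))

predicted = [0.95, 0.10, 0.60, 0.30]  # hypothetical cosine similarities
gold = [0.90, 0.20, 0.70, 0.40]       # gold labels
print(round(pearson(predicted, gold), 3))
print(round(spearman(predicted, gold), 3))  # ranks agree exactly -> 1.0
```

Spearman is the headline metric here because it only rewards getting the *ordering* of pair similarities right, which matters more for ranking candidates than the exact predicted values.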
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:628af632e016d61b250d100cdf4a3b0b13f3c1b2802767ceea7fd31e83f3ebfa
+oid sha256:3027997d304a9a29611ddd1da76cb28a434325115857204697bd012b7115bb23
 size 437967672
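model.safetensors is stored as a Git LFS pointer file: only the sha256 oid changes between commits, while the size stays 437967672 bytes, since retraining rewrites the weight values without changing the tensor shapes. A small sketch of reading such a pointer (the pointer text is copied from the diff above):

```python
def parse_lfs_pointer(text):
    """Parse a Git LFS pointer file into a dict of its key/value lines."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:3027997d304a9a29611ddd1da76cb28a434325115857204697bd012b7115bb23
size 437967672
"""

info = parse_lfs_pointer(pointer)
print(info["size"])  # byte size of the real file, unchanged by this commit
print(info["oid"])   # content hash of the new weights
```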