ShushantLLM committed (verified)
Commit d493a02 · 1 Parent(s): fd78736

Add new SentenceTransformer model
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+ tokenizer.json filter=lfs diff=lfs merge=lfs -text
1_Pooling/config.json ADDED
@@ -0,0 +1,10 @@
+ {
+     "word_embedding_dimension": 768,
+     "pooling_mode_cls_token": false,
+     "pooling_mode_mean_tokens": true,
+     "pooling_mode_max_tokens": false,
+     "pooling_mode_mean_sqrt_len_tokens": false,
+     "pooling_mode_weightedmean_tokens": false,
+     "pooling_mode_lasttoken": false,
+     "include_prompt": true
+ }
README.md ADDED
@@ -0,0 +1,904 @@
+ ---
+ tags:
+ - sentence-transformers
+ - sentence-similarity
+ - feature-extraction
+ - dense
+ - generated_from_trainer
+ - dataset_size:810
+ - loss:MultipleNegativesRankingLoss
+ base_model: sentence-transformers/paraphrase-multilingual-mpnet-base-v2
+ widget:
12
+ - source_sentence: CBRN defense, predictive analytics, natural language understanding
13
+ sentences:
14
+ - "experience with speech interfaces Lead and evaluate changing dialog evaluation\
15
+ \ conventions, test tooling developments, and pilot processes to support expansion\
16
+ \ to new data areas Continuously evaluate workflow tools and processes and offer\
17
+ \ solutions to ensure they are efficient, high quality, and scalable Provide expert\
18
+ \ support for a large and growing team of data analysts Provide support for ongoing\
19
+ \ and new data collection efforts as a subject matter expert on conventions and\
20
+ \ use of the data Conduct research studies to understand speech and customer-Alexa\
21
+ \ interactions Assist scientists, program and product managers, and other stakeholders\
22
+ \ in defining and validating customer experience metrics\n\nWe are open to hiring\
23
+ \ candidates to work out of one of the following locations:\n\nBoston, MA, USA\
24
+ \ | Seattle, WA, USA\n\nBasic Qualifications\n\n 3+ years of data querying languages\
25
+ \ (e.g. SQL), scripting languages (e.g. Python) or statistical/mathematical software\
26
+ \ (e.g. R, SAS, Matlab, etc.) experience 2+ years of data scientist experience\
27
+ \ Bachelor's degree Experience applying theoretical models in an applied environment\n\
28
+ \nPreferred Qualifications\n\n Experience in Python, Perl, or another scripting\
29
+ \ language Experience in a ML or data scientist role with a large technology company\
30
+ \ Master's degree in a quantitative field such as statistics, mathematics, data\
31
+ \ science, business analytics, economics, finance, engineering, or computer science\n\
32
+ \nAmazon is committed to a diverse and inclusive workplace. Amazon is \n\nOur\
33
+ \ compensation reflects the cost of labor across several US geographic markets.\
34
+ \ The base pay for this position ranges from $111,600/year in our lowest geographic\
35
+ \ market up to $212,800/year in our highest geographic market. Pay is based on\
36
+ \ a number of factors including market location and may vary depending on job-related\
37
+ \ knowledge, skills, and experience. Amazon is a total compensation company. Dependent\
38
+ \ on the position offered, equity, sign-on payments, and other forms of compensation\
39
+ \ may be provided as part of a total compensation package, in addition to a full\
40
+ \ range of medical, financial, and/or other benefits. For more information, please\
41
+ \ visit https://www.aboutamazon.com/workplace/employee-benefits. This position\
42
+ \ will remain posted until filled. Applicants should apply via our internal or\
43
+ \ external career site.\n\n\nCompany - Amazon.com Services LLC\n\nJob ID: A2610750"
44
+ - 'Skills: Your Expertise:
45
+
46
+ 5+ years in industry experience and a degree (Masters or PhD is a plus) in a quantitative
47
+ field (e.g., Statistics, Econometrics, Computer Science, Engineering, Mathematics,
48
+ Data Science, Operations Research).Expert communication and collaboration skills
49
+ with the ability to work effectively with internal teams in a cross-cultural and
50
+ cross-functional environment. Ability to conduct rigorous analysis and communicate
51
+ conclusions to both technical and non-technical audiencesExperience partnering
52
+ with internal teams to drive action and providing expertise and direction on analytics,
53
+ data science, experimental design, and measurementExperience in analysis of A|B
54
+ experiments and statistical data analysisExperience designing and building metrics,
55
+ from conception to building prototypes with data pipelinesStrong knowledge in
56
+ at least one programming language (Python or R) and in SQLAbility to drive data
57
+ strategies, with a central source of truth to impact business decisionsKnowledge
58
+ and experience in insurance industry - a plusKnowledge and experience in customer
59
+ experience measurement - a plus
60
+
61
+ Keywords:Education: Minimum: BS/BA in CS or related field (or self-taught/ equivalent
62
+ work experience) Preferred: MS/MA in CS or related field'
63
+ - "requirements of the program or company.\n\n Working across the globe, V2X builds\
64
+ \ smart solutions designed to integrate physical and digital infrastructure from\
65
+ \ base to battlefield. We bring 120 years of successful mission support to improve\
66
+ \ security, streamline logistics, and enhance readiness. Aligned around a shared\
67
+ \ purpose, our $3.9B company and 16,000 people work alongside our clients, here\
68
+ \ and abroad, to tackle their most complex challenges with integrity, respect,\
69
+ \ responsibility, and professionalism. \n\nAt V2X, we are making a difference\
70
+ \ by delivering decision support tools critical for the protection of our forces\
71
+ \ when threatened by both physical and Chemical, Biological, Radiological, or\
72
+ \ Nuclear (CBRN) threats.\n\nWe are expanding in data science to provide the best\
73
+ \ information possible utilizing the latest techniques in Machine Learning (including\
74
+ \ Deep Learning, Neural network). We are on the forefront of CBRN defense and\
75
+ \ we are looking for talented Data Scientists that have applied experience in\
76
+ \ the fields of artificial intelligence, machine learning and/or natural language\
77
+ \ processing to join our team. Our data scientists work closely everyday with\
78
+ \ project managers, subject matter experts and software engineers to work on challenges\
79
+ \ in machine intelligence, data mining, and machine learning, and work together\
80
+ \ with agility to build capabilities that impress our customers. We partner and\
81
+ \ collaborate with universities to being best minds together.\n\nData scientists\
82
+ \ will have opportunities to work on projects with highest priority to our business.\
83
+ \ Vital to success in this role is the ability to determine, define and deploy\
84
+ \ predictive / prescriptive analytic solutions to identify and perform root cause\
85
+ \ analysis on adverse trends, by choosing best fit methods, defining algorithms,\
86
+ \ and validating and deploying models to achieve results.\n\nResponsibilities\n\
87
+ \nMajor Job Activities:\n\n Partner with our development teams to solve problems\
88
+ \ and identify trends and opportunities to leverage data from multiple sources.\
89
+ \ Collaborate across multiple teams. Passionate about working with large and\
90
+ \ complex unstructured and structured data sets. Strong communication and interpersonal\
91
+ \ skills. You should be able to work across functions and effectively present,\
92
+ \ recommend and communicate a position by demonstrating its value and tradeoffs.\
93
+ \ Comfortable conducting design, algorithm, and code reviews. Able to self-direct\
94
+ \ and succeed with minimal guidance. \n\nMaterial & Equipment Directly Used:\n\
95
+ \nComputer, Phone, and basic office materials.\n\nWorking Environment:\n\n Function\
96
+ \ in an office environment in a stationary position approximately 50 percent of\
97
+ \ the time or more. Must be able to operate standard office equipment, such as\
98
+ \ a computer, copy machine, and printer. \n\nQualifications\n\nEducation / Certifications:\n\
99
+ \n Bachelor’s degree in a computer, engineering, or quantitative discipline (e.g.,\
100
+ \ statistics, operations research, bioinformatics, economics, computational biology,\
101
+ \ computer science, mathematics, physics, electrical engineering, industrial engineering).\
102
+ \ Master's or Ph.D. in a quantitative discipline preferred. \n\nClearance Requirement:\
103
+ \ \n\nMust have or be able to obtain an active U.S. DoD Secret (or higher) level\
104
+ \ Security Clearance.\n\nExperience / Skills:\n\n 5+ years of relevant work experience\
105
+ \ in data analysis or related field. (e.g., statistician, data analyst, data scientist).\
106
+ \ Programming experience in one or more of the following: R, MATLAB, C, C++,\
107
+ \ Java, Python, Scala Experience in Natural Language Understanding, Computer\
108
+ \ Vision, Machine Learning, Algorithmic Foundations of Optimization, Data Mining\
109
+ \ or Machine Intelligence (Artificial Intelligence). Experience with statistical\
110
+ \ software (e.g., R, Octave, Julia, MATLAB, pandas) and database languages (e.g.,\
111
+ \ SQL). Experience with machine learning related open source libraries including,\
112
+ \ but not limited to: Hadoop, Spark, SciKit-Learn, TensorFlow, etc. Contribution\
113
+ \ to research communities and/or efforts, including publishing papers at conferences.\
114
+ \ \n\nWe are committed to an inclusive and diverse workplace that values and supports\
115
+ \ the contributions of each individual. This commitment along with our common\
116
+ \ Vision and Values of Integrity, Respect, and Responsibility, allows us to leverage\
117
+ \ differences, encourage innovation and expand our success in the global marketplace.\
118
+ \ V2X is an Equal Opportunity /Affirmative Action Employer. All qualified applicants\
119
+ \ will receive consideration for employment without regard to race, color, religion,\
120
+ \ age, sex, national origin, protected veteran status or status as an individual\
121
+ \ with a disability."
122
+ - source_sentence: Senior Data Analyst Pricing, data product automation, pricing strategy
123
+ analysis
124
+ sentences:
125
+ - Skills You BringBachelor’s or Master’s Degree in a technology related field (e.g.
126
+ Engineering, Computer Science, etc.) required with 6+ years of experienceInformatica
127
+ Power CenterGood experience with ETL technologiesSnaplogicStrong SQLProven data
128
+ analysis skillsStrong data modeling skills doing either Dimensional or Data Vault
129
+ modelsBasic AWS Experience Proven ability to deal with ambiguity and work in fast
130
+ paced environmentExcellent interpersonal and communication skillsExcellent collaboration
131
+ skills to work with multiple teams in the organization
132
+ - "experience, an annualized transactional volume of $140 billion in 2023, and approximately\
133
+ \ 3,200 employees located in 12+ countries, Paysafe connects businesses and consumers\
134
+ \ across 260 payment types in over 40 currencies around the world. Delivered through\
135
+ \ an integrated platform, Paysafe solutions are geared toward mobile-initiated\
136
+ \ transactions, real-time analytics and the convergence between brick-and-mortar\
137
+ \ and online payments. Further information is available at www.paysafe.com.\n\n\
138
+ Are you ready to make an impact? Join our team that is inspired by a unified vision\
139
+ \ and propelled by passion.\n\nPosition Summary\n\nWe are looking for a dynamic\
140
+ \ and flexible, Senior Data Analyst, Pricing to support our global Sales and Product\
141
+ \ organizations with strategic planning, analysis, and commercial pricing efforts\
142
+ \ . As a Senior Data Analyst , you will be at the frontier of building our Pricing\
143
+ \ function to drive growth through data and AI-enabled capabilities. This opportunity\
144
+ \ is high visibility for someone hungry to drive the upward trajectory of our\
145
+ \ business and be able to contribute to their efforts in the role in our success.\n\
146
+ \nYou will partner with Product Managers to understand their commercial needs,\
147
+ \ then prioritize and work with a cross-functional team to deliver pricing strategies\
148
+ \ and analytics-based solutions to solve and execute them. Business outcomes will\
149
+ \ include sustainable growth in both revenues and gross profit.\n\nThis role is\
150
+ \ based in Jacksonville, Florida and offers a flexible hybrid work environment\
151
+ \ with 3 days in the office and 2 days working remote during the work week.\n\n\
152
+ Responsibilities\n\n Build data products that power the automation and effectiveness\
153
+ \ of our pricing function, driving better quality revenues from merchants and\
154
+ \ consumers. Partner closely with pricing stakeholders (e.g., Product, Sales,\
155
+ \ Marketing) to turn raw data into actionable insights. Help ask the right questions\
156
+ \ and find the answers. Dive into complex pricing and behavioral data sets, spot\
157
+ \ trends and make interpretations. Utilize modelling and data-mining skills to\
158
+ \ find new insights and opportunities. Turn findings into plans for new data\
159
+ \ products or visions for new merchant features. Partner across merchant Product,\
160
+ \ Sales, Marketing, Development and Finance to build alignment, engagement and\
161
+ \ excitement for new products, features and initiatives. Ensure data quality\
162
+ \ and integrity by following and enforcing data governance policies, including\
163
+ \ alignment on data language. \n\n Qualifications \n\n Bachelor’s degree in\
164
+ \ a related field of study (Computer Science, Statistics, Mathematics, Engineering,\
165
+ \ etc.) required. 5+ years of experience of in-depth data analysis role, required;\
166
+ \ preferably in pricing context with B2B & B2C in a digital environment. Proven\
167
+ \ ability to visualize data intuitively, cleanly and clearly in order to make\
168
+ \ important insights simplified. Experience across large and complex datasets,\
169
+ \ including customer behavior, and transactional data. Advanced in SQL and in\
170
+ \ Python, preferred. Experience structuring and analyzing A/B tests, elasticities\
171
+ \ and interdependencies, preferred. Excellent communication and presentation\
172
+ \ skills, with the ability to explain complex data insights to non-technical audiences.\
173
+ \ \n\n Life at Paysafe: \n\nOne network. One partnership. At Paysafe, this is\
174
+ \ not only our business model; this is our mindset when it comes to our team.\
175
+ \ Being a part of Paysafe means you’ll be one of over 3,200 members of a world-class\
176
+ \ team that drives our business to new heights every day and where we are committed\
177
+ \ to your personal and professional growth.\n\nOur culture values humility, high\
178
+ \ trust & autonomy, a desire for excellence and meeting commitments, strong team\
179
+ \ cohesion, a sense of urgency, a desire to learn, pragmatically pushing boundaries,\
180
+ \ and accomplishing goals that have a direct business impact.\n\n \n\nPaysafe\
181
+ \ provides equal employment opportunities to all employees, and applicants for\
182
+ \ employment, and prohibits discrimination of any type concerning ethnicity, religion,\
183
+ \ age, sex, national origin, disability status, sexual orientation, gender identity\
184
+ \ or expression, or any other protected characteristics. This policy applies to\
185
+ \ all terms and conditions of recruitment and employment. If you need any reasonable\
186
+ \ adjustments, please let us know. We will be happy to help and look forward to\
187
+ \ hearing from you."
188
+ - "Experience : 10 yearsLocation : RemoteDuration: Full TimeJob DetailsData Warehouse,\
189
+ \ ETL, Advanced SQL,Data Profiling, Source to Target Mapping,Business Requirement\
190
+ \ Document, FRS, Healthcare.Should be able to navigate the code - developer background\n\
191
+ \uFEFFThanks & Regard's\nMohd FurquanLead Technical RecruiterE-mail: furqan@msrcosmos.comDirect\
192
+ \ No: +1 925 313 8949LinkedIn-ID :linkedin.com/in/mohd-furquan-94237816aVisit\
193
+ \ us: www.msrcosmos.com"
194
+ - source_sentence: CPG data analysis, Nielsen IRI expertise, Power Query dashboard
195
+ development
196
+ sentences:
197
+ - 'Skills :
198
+
199
+ a) Azure Data Factory – Min 3 years of project experiencea. Design of pipelinesb.
200
+ Use of project with On-prem to Cloud Data Migrationc. Understanding of ETLd. Change
201
+ Data Capture from Multiple Sourcese. Job Schedulingb) Azure Data Lake – Min 3
202
+ years of project experiencea. All steps from design to deliverb. Understanding
203
+ of different Zones and design principalc) Data Modeling experience Min 5 Yearsa.
204
+ Data Mart/Warehouseb. Columnar Data design and modelingd) Reporting using PowerBI
205
+ Min 3 yearsa. Analytical Reportingb. Business Domain Modeling and data dictionary
206
+
207
+ Interested please apply to the job, looking only for W2 candidates.'
208
+ - "experienced and highly skilled Sr Data Engineer to join us. This role requires\
209
+ \ a seasoned professional with a deep understanding of automated data pipelines,\
210
+ \ cloud infrastructure, databases, and workflow engines. The ideal candidate will\
211
+ \ have a minimum of 5 years of technical lead experience in the medical device\
212
+ \ field and at least 7 years of experience in data engineering. Proficiency in\
213
+ \ Python and a proven track record of leading projects to completion are essential.\n\
214
+ \nPrimary Duties\n\nDesign, develop, and manage robust, secure, scalable, highly\
215
+ \ available, and dynamic solutions to drive business objectives. Lead the architecture\
216
+ \ and implementation of advanced cloud-based data engineering solutions, leveraging\
217
+ \ AWS technologies and best practices. Manage and optimize data pipelines, ensuring\
218
+ \ timely and accurate data availability for analytics and machine learning applications.\
219
+ \ Oversee the administration and performance tuning of databases and workflow\
220
+ \ engines. Collaborate with cross-functional teams (e.g., product management,\
221
+ \ IT, software engineering) to define data requirements, integrate systems, and\
222
+ \ implement data governance and security policies. Mentor junior data engineers\
223
+ \ and oversee the team's development efforts, promoting best practices in coding,\
224
+ \ architecture, and data management. Stay abreast of emerging technologies and\
225
+ \ trends in data engineering, cloud services, and the medical device industry\
226
+ \ to drive innovation and competitive advantage. \n\nKnowledge, Experience & Skills\n\
227
+ \nDegree in Computer Science, Engineering, Information Systems, or a related field.\
228
+ \ Requiring a minimum of Bachelor’s degree +7yrs of experience or a Master’s degree\
229
+ \ +5yrs of experience. Minimum of 7 years of experience in data engineering, with\
230
+ \ expertise in developing and managing automated data pipelines, AWS cloud infrastructure,\
231
+ \ databases, and workflow engines. Certifications in AWS and data engineering\
232
+ \ preferred. Experience with machine learning algorithms and data modeling techniques.\
233
+ \ At least 5 years of experience in the medical device IVD industry, with a strong\
234
+ \ understanding of FDA regulatory standards and compliance requirements. Expert\
235
+ \ proficiency in Python programming and software engineering principles. Demonstrated\
236
+ \ experience with AWS services (e.g., EC2, RDS, S3, Lambda, Glue, Redshift, Athena,\
237
+ \ EMR) and data pipeline tools (e.g., Apache Airflow, Luigi, etc). Strong knowledge\
238
+ \ of database management (Postgres and Snowflake), SQL, and NoSQL databases. Adept\
239
+ \ at queries, report writing and presenting findings Experienced in developing\
240
+ \ and maintaining ETL pipelines in a cloud environmentExperienced in Unit Testing\
241
+ \ preferred Strong analytical skills with the ability to organize, analyze, and\
242
+ \ disseminate information with attention to detail and accuracy Excellent communication\
243
+ \ and task management skills. Comfort working in a dynamic, fast-paced, research-oriented\
244
+ \ group with several ongoing concurrent projectsFull fluency (verbal and written)\
245
+ \ of the English language is a must. \n\nThe estimated salary range for this role\
246
+ \ based in California is between $148,700 and $178,400 annually. This role is\
247
+ \ eligible to receive a variable annual bonus based on company, team, and individual\
248
+ \ performance per bioMerieux’s bonus program. This range may differ from ranges\
249
+ \ offered for similar positions elsewhere in the country given differences in\
250
+ \ cost of living. Actual compensation within this range is determined based on\
251
+ \ the successful candidate’s experience and will be presented in writing at the\
252
+ \ time of the offer.\n\nIn addition, bioMérieux offers a competitive Total Rewards\
253
+ \ package that may include:\n\nA choice of medical (including prescription), dental,\
254
+ \ and vision plans providing nationwide coverage and telemedicine optionsCompany-Provided\
255
+ \ Life and Accidental Death InsuranceShort and Long-Term Disability InsuranceRetirement\
256
+ \ Plan including a generous non-discretionary employer contribution and employer\
257
+ \ match. Adoption AssistanceWellness ProgramsEmployee Assistance ProgramCommuter\
258
+ \ BenefitsVarious voluntary benefit offeringsDiscount programsParental leaves\n\
259
+ \nBioFire Diagnostics, LLC. is an Equal Opportunity/Affirmative Action Employer.\
260
+ \ All qualified applicants will receive consideration for employment without regard\
261
+ \ to race, color, religion, sex, sexual orientation, gender identity, national\
262
+ \ origin, age, protected veteran or disabled status, or genetic information.\n\
263
+ \nPlease be advised that the receipt of satisfactory responses to reference requests\
264
+ \ and the provision of satisfactory proof of an applicant’s identity and legal\
265
+ \ authorization to work in the United States are required of all new hires. Any\
266
+ \ misrepresentation, falsification, or material omission may result in the failure\
267
+ \ to receive an offer, the retraction of an offer, or if already hired, dismissal.\
268
+ \ If you are a qualified individual with a disability, you may request a reasonable\
269
+ \ accommodation in BioFire Diagnostics’ application process by contacting us via\
270
+ \ telephone at (385) 770-1132, by email at [email protected], or by dialing 711\
271
+ \ for access to Telecommunications Relay Services (TRS)."
272
+ - "requirements into analytical frameworks.Dashboard Development: Design and maintain\
273
+ \ dashboards using Power Query in Excel, good in analytics in generating metrics\
274
+ \ & measures and ensuring accurate and real-time data representation. \nRequired\
275
+ \ QualificationsProfessional Experience: 3-6 years as a business analyst, with\
276
+ \ mandatory experience in the CPG sector and should have worked on brand dataTechnical\
277
+ \ Proficiency: Advanced skills in Excel and Power Query;Communication Skills:\
278
+ \ Exceptional ability to communicate complex data insights to non-technical stakeholders.Location:\
279
+ \ Position based in Springdale. Preferred AttributesProven experience in data-driven\
280
+ \ decision-making processes.Ability to handle multiple projects simultaneously,\
281
+ \ with a focus on deadlines and results."
282
+ - source_sentence: ETL Pipelines, Apache Spark, AirFlow
283
+ sentences:
284
+ - "Qualifications\n\n - Currently enrolled in a Bachelor’s or Master’s degree in\
285
+ \ Software Development, Computer Science, Computer Engineering, or a related technical\
286
+ \ discipline\n- Must obtain work authorization in country of employment at the\
287
+ \ time of hire, and maintain ongoing work authorization during employment.\n\n\
288
+ Preferred Qualifications: \n- Fluency in SQL or other programming languages (Python,\
289
+ \ R etc) for data manipulation\n- Ability to thrive in a fast paced work environment\
290
+ \ \n- Ability to drive projects to completion with minimal guidance\n- Ability\
291
+ \ to communicate the results of analyses in a clear and effective manner\n\nTikTok\
292
+ \ is committed to creating an inclusive space where employees are valued for their\
293
+ \ skills, experiences, and unique perspectives. Our platform connects people from\
294
+ \ across the globe and so does our workplace. At TikTok, our mission is to inspire\
295
+ \ creativity and bring joy. To achieve that goal, we are committed to celebrating\
296
+ \ our diverse voices and to creating an environment that reflects the many communities\
297
+ \ we reach. We are passionate about this and hope you are too.\n\nTikTok is committed\
298
+ \ to providing reasonable accommodations in our recruitment processes for candidates\
299
+ \ with disabilities, pregnancy, sincerely held religious beliefs or other reasons\
300
+ \ protected by applicable laws. If you need assistance or a reasonable accommodation,\
301
+ \ please reach out to us at https://shorturl.at/cdpT2\n\nBy submitting an application\
302
+ \ for this role, you accept and agree to our global applicant privacy policy,\
303
+ \ which may be accessed here: https://careers.tiktok.com/legal/privacy. \n\nJob\
304
+ \ Information:\n\n【For Pay Transparency】Compensation Description (annually) The\
305
+ \ base salary range for this position in the selected city is $45 - $45annually.\
306
+ \ We cover 100% premium coverage for Full-Time intern medical insurance after\
307
+ \ 90 days from the date of hire. Medical coverage only, no dental or vision coverage.Our\
308
+ \ time off and leave plans are: Paid holidays and paid sick leave. The sick leave\
309
+ \ entitlement is based on the time you join.We also provide mental and emotional\
310
+ \ health benefits through our Employee Assistance Program and provide reimbursements\
311
+ \ for your mobile phone expense. The Company reserves the right to modify or change\
312
+ \ these benefits programs at any time, with or without notice."
313
+ - "Experience as a Product Data Analyst at TGG:Achieving business results as a client\
314
+ \ facing consultant for our clients in various types of engagements within a variety\
315
+ \ of industries.Delivering high quality work to our clients within our technology\
316
+ \ service line. Being part of a collaborative, values-based firm that has a reputation\
317
+ \ for great work and satisfied clients.Working with senior IT leaders to communicate\
318
+ \ strategic goals to their organization, including leading client and internal\
319
+ \ development teams on best practices.\nWhat You Will Work On:Analyze large datasets\
320
+ \ to identify patterns, trends, and opportunities for product optimization.Develop\
321
+ \ and maintain dashboards and reports to track key performance metrics.Collaborate\
322
+ \ with product managers, marketers, and engineers to ideate, prioritize, and implement\
323
+ \ data-driven initiatives.Conduct A/B testing and other statistical analyses to\
324
+ \ evaluate the effectiveness of product changes.Communicate findings and recommendations\
325
+ \ to stakeholders through clear and concise presentations.Contribute analytical\
326
+ \ insights to inform product vision and deliver value.\nWho Will You Work With:Client\
327
+ \ stakeholders ranging from individual contributors to senior executives.A collaborative\
328
+ \ team of consultants that deliver outstanding client service.TGG partners, principals,\
329
+ \ account leaders, managers, and staff supporting you to excel within client projects\
330
+ \ and to achieve your professional development goals.\nExamples of What You Bring\
331
+ \ to the Table:You have strong analysis capabilities and thrive on working collaboratively\
332
+ \ to deliver successful results for clients. You have experience with these technologies:Proficiency\
333
+ \ in SQL and Python for data extraction, manipulation, and analysis.Strong understanding\
334
+ \ of statistical concepts and techniques.Intermediate experience with Tableau,\
335
+ \ Power BI, Adobe Analytics, or similar BI tools.Ability to analyze requirements,\
336
+ \ design, implement, debug, and deploy Cloud Platform services and components.At\
337
+ \ least basic exposure to data science and machine learning methods.Familiarity\
338
+ \ with source control best practices: Define, Setup/Configure, Deploy and Maintain\
339
+ \ source code (e.g. GIT, VisualSafe Source).Ability to develop and schedule processes\
340
+ \ to extract, transform, and store data from these systems: SQL databases, Azure\
341
+ \ cloud services, Google cloud service, Snowflake.4-8 years of relevant experience.Bachelor’s\
342
+ \ degree in Computer Science, Statistics, Economics, Mathematics, or a related\
343
+ \ field; or equivalent combination of education, training, and experience.Analytical\
344
+ \ Product Mindset: Ability to approach problems analytically and derive actionable\
345
+ \ insights from complex datasets, while remaining focused on providing value to\
346
+ \ customers Strategic Thinking: Demonstrated ability to translate data findings\
347
+ \ into strategic, achievable recommendations to drive business outcomes.Communication\
348
+ \ Skills: Excellent verbal and written communication skills.Ability to effectively\
349
+ \ convey technical concepts from technical to non-technical stakeholders and vice-versa.Team\
350
+ \ Player: Proven track record of collaborating effectively with cross-functional\
351
+ \ teams in a fast-paced environment.Adaptability: Have consistently demonstrated\
352
+ \ the ability to bring structure to complex, unstructured environments.Familiarity\
353
+ \ with Agile development methodologies.Ability to adapt to changing priorities\
354
+ \ to thrive in dynamic work environments.\nSalary and Benefits:Nothing is more\
355
+ \ important to us than the well-being of our team. That is why we are proud to\
356
+ \ offer a full suite of competitive health benefits along with additional benefits\
357
+ \ such as: flexible PTO, a professional development stipend and work from home\
358
+ \ stipend, volunteer opportunities, and team social activities.\nSalaries vary\
359
+ \ and are dependent on considerations such as: experience and specific skills/certifications.\
360
+ \ The base plus target bonus total compensation range for this role is $95,000\
361
+ \ - $125,000. Additional compensation beyond this range is available as a result\
362
+ \ of leadership and business development opportunities. Salary details are discussed\
363
+ \ openly during the hiring process. \nWork Environment:TGG is headquartered in\
364
+ \ Portland, Oregon, and has team members living in various locations across the\
365
+ \ United States. Our consultants must have the ability to travel and to work remotely\
366
+ \ or onsite. Each engagement has unique conditions, and we work collaboratively\
367
+ \ to meet both our client and team's needs regarding onsite and travel requirements.\
368
+ \ \nWhy The Gunter Group:TGG was created to be different, to be relational, to\
369
+ \ be insightful, and to maximize potential for our consultants, our clients, and\
370
+ \ our community. We listen first so we can learn, analyze, and deliver meaningful\
371
+ \ solutions for our clients. Our compass points towards our people and our “Non-Negotiables”\
372
+ \ always. Our driven employees make us who we are — a talented team of leaders\
373
+ \ with deep and diverse professional experience.If you think this role is the\
374
+ \ right fit, please submit your resume and cover letter so we can learn more about\
375
+ \ you. \nThe Gunter Group LLC is"
376
+ - 'Requirements & Day-to-Day: Design, develop, and support scalable data processing
377
+ pipelines using Apache Spark and Java/Scala. Lead a talented team and make a significant
378
+ impact on our data engineering capabilities. Implement and manage workflow orchestration
379
+ with AirFlow for efficient data processing. Proficiently use SQL for querying
380
+ and data manipulation tasks. Collaborate with cross-functional teams to gather
381
+ requirements and ensure alignment with data engineering solutions. Essential
382
+ Criteria: a bachelor’s degree in computer science or another relevant discipline,
383
+ and a minimum of five years of relevant experience in data engineering. Solid
384
+ experience with Apache Spark for large-scale data processing. Proficiency in Java
385
+ or Scala programming languages. Strong knowledge of AirFlow for workflow orchestration.
386
+ Proficient in SQL for data querying and manipulation.'
387
+ - source_sentence: Data organization, document analysis, records management
388
+ sentences:
389
+ - "skills and build your career in a rapidly evolving business climate? Are you\
390
+ \ looking for a career where professional development is embedded in your employer’s\
391
+ \ core culture? If so, Chenega Military, Intelligence & Operations Support (MIOS)\
392
+ \ could be the place for you! Join our team of professionals who support large-scale\
393
+ \ government operations by leveraging cutting-edge technology and take your career\
394
+ \ to the next level!\n\nAs one of the newest Chenega companies, Chenega Defense\
395
+ \ & Aerospace Solutions (CDAS) was developed with the purpose of providing expert\
396
+ \ Engineering and Technical Support Services to federal customers.\n\nThe Data\
397
+ \ Analyst will analyze a large variety of documents to ensure proper placement\
398
+ \ in physical files, perform high-level scanning of master file documents to convert\
399
+ \ them into an electronic format, and provide meticulous organization and management\
400
+ \ of case files, including sorting and categorizing documents before scanning.\n\
401
+ \nResponsibilities\n\nWork within the Standard Operating Procedure for the organization\
402
+ \ of physical files containing documents of various types Establish or maintain\
403
+ \ physical files, including proper placement of documents as they are createdDisseminate\
404
+ \ significant amounts of information with attention to detail and accuracyPerform\
405
+ \ word processing tasksPerform data entry and metadata entry for electronic documentsReconcile\
406
+ \ inconsistenciesGather information and organize investigative packages, case\
407
+ \ files, or presentationsObtain additional information from other investigative\
408
+ \ agencies or databasesVerify information and files against the tracking systemMaintain\
409
+ \ internal status information on the disposition of designated information and\
410
+ \ filesDistribute and receive documentsAssist analyst or government official in\
411
+ \ obtaining or collecting all documents or information to complete case fileProvide\
412
+ \ administrative information and assistance concerning the case or files to other\
413
+ \ agencies or organizationsOther duties as assigned\n\n\nQualifications\n\nHigh\
414
+ \ school diploma or GED equivalent required Must have resided in the United States\
415
+ \ for at least three out of the last five years or worked for the U.S. in a foreign\
416
+ \ country as either an employee or contractor in a federal or military capacity\
417
+ \ for at least three of the last five yearsHaving your own Personally Owned Vehicle\
418
+ \ (POV) is requiredPossess a demonstrated ability to analyze documents to extract\
419
+ \ informationGood oral and written communication skillsHave hands-on familiarity\
420
+ \ with a variety of computer applications,Must have a working knowledge of a variety\
421
+ \ of computer software applications in word processing, spreadsheets, databases,\
422
+ \ presentation software (MS Word, Excel, PowerPoint), and OutlookA valid driver’s\
423
+ \ license is requiredTop Secret clearance required \n\n\nKnowledge, Skills, And\
424
+ \ Abilities\n\nPossess a demonstrated ability to analyze documents to extract\
425
+ \ informationGood oral and written communication skillsHave hands-on familiarity\
426
+ \ with a variety of computer applications, including word processing, database,\
427
+ \ spreadsheet, and telecommunications softwareMust be a team playerMust be able\
428
+ \ to work independently and with USMS staff to interpret data rapidly and accurately\
429
+ \ for proper execution in a records management databaseMust have a working knowledge\
430
+ \ of a variety of computer software applications in word processing, spreadsheets,\
431
+ \ databases, presentation software (MS Word, Excel, Access, PowerPoint), and OutlookAbility\
432
+ \ to work independently on tasks be a self-starter and complete projects with\
433
+ \ a team as they ariseAttention to detail and the ability to direct the work of\
434
+ \ others efficiently and effectivelyAbility to consistently deliver high-quality\
435
+ \ work under extreme pressureAbility to work shiftworkAbility to lift and move\
436
+ \ boxes up to 25 pounds, including frequently utilizing hands, arms, and legs\
437
+ \ for file placement and removalExperience with scanning software\n\n\nHow You’ll\
438
+ \ Grow\n\nAt Chenega MIOS, our professional development plan focuses on helping\
439
+ \ our team members at every level of their career to identify and use their strengths\
440
+ \ to do their best work every day. From entry-level employees to senior leaders,\
441
+ \ we believe there’s always room to learn.\n\nWe offer opportunities to help sharpen\
442
+ \ skills in addition to hands-on experience in the global, fast-changing business\
443
+ \ world. From on-the-job learning experiences to formal development programs,\
444
+ \ our professionals have a variety of opportunities to continue to grow throughout\
445
+ \ their careers.\n\nBenefits\n\nAt Chenega MIOS, we know that great people make\
446
+ \ a great organization. We value our team members and offer them a broad range\
447
+ \ of benefits.\n\nLearn more about what working at Chenega MIOS can mean for you.\n\
448
+ \nChenega MIOS’s culture\n\nOur positive and supportive culture encourages our\
449
+ \ team members to do their best work every day. We celebrate individuals by recognizing\
450
+ \ their uniqueness and offering them the flexibility to make daily choices that\
451
+ \ can help them be healthy, centered, confident, and aware. We offer well-being\
452
+ \ programs and continuously look for new ways to maintain a culture where we excel\
453
+ \ and lead healthy, happy lives.\n\nCorporate citizenship\n\nChenega MIOS is led\
454
+ \ by a purpose to make an impact that matters. This purpose defines who we are\
455
+ \ and extends to relationships with our clients, our team members, and our communities.\
456
+ \ We believe that business has the power to inspire and transform. We focus on\
457
+ \ education, giving, skill-based volunteerism, and leadership to help drive positive\
458
+ \ social impact in our communities.\n\nLearn more about Chenega’s impact on the\
459
+ \ world.\n\nChenega MIOS News- https://chenegamios.com/news/\n\nTips from your\
460
+ \ Talent Acquisition team\n\nWe Want Job Seekers Exploring Opportunities At Chenega\
461
+ \ MIOS To Feel Prepared And Confident. To Help You With Your Research, We Suggest\
462
+ \ You Review The Following Links\n\nChenega MIOS web site - www.chenegamios.com\n\
463
+ \nGlassdoor - https://www.glassdoor.com/Overview/Working-at-Chenega-MIOS-EI_IE369514.11,23.htm\n\
464
+ \nLinkedIn - https://www.linkedin.com/company/1472684/\n\nFacebook - https://www.facebook.com/chenegamios/\n\
465
+ \n#DICE\n\n#Chenega Defense & Aerospace Solutions, LLC"
466
+ - "Qualifications\n Data Engineering, Data Modeling, and ETL (Extract Transform\
467
+ \ Load) skillsData Warehousing and Data Analytics skillsExperience with data-related\
468
+ \ tools and technologiesStrong problem-solving and analytical skillsExcellent\
469
+ \ written and verbal communication skillsAbility to work independently and remotelyExperience\
470
+ \ with cloud platforms (e.g., AWS, Azure) is a plusBachelor's degree in Computer\
471
+ \ Science, Information Systems, or related field"
472
+ - skills will be difficult. The more aligned skills they have, the better.Organizational
473
+ Structure And Impact:Describe the function your group supports from an LOB perspective:Experienced
474
+ ML engineer to work on universal forecasting models. Focus on ML forecasting,
475
+ Python and Hadoop. Experience with Python, ARIMA, FB Prophet, Seasonal Naive,
476
+ Gluon.Data Science Innovation (DSI) is a very unique application. It is truly
477
+ ML-driven at its heart and our forecasting models originally looked singularly
478
+ at cash balance forecasting. That has all changed as we have now incorporated
479
+ approximately 100 additional financial metrics from our new DSI Metrics Farm.
480
+ This allows future model executions to become a Universal Forecasting Model instead
481
+ of being limited to just cash forecasting. It’s a very exciting application, especially
482
+ since the models have been integrated within a Marketplace concept UI that allows
483
+ Subscriber/Contributor functionality to make information and processing more personal
484
+ and with greater extensibility across the enterprise. The application architecture
485
+ is represented by OpenShift, Linux, Oracle, SQL Server, Hadoop, MongoDB, APIs,
486
+ and a great deal of Python code.Describe the current initiatives that this resource
487
+ will be impacting:Working toward implementation of Machine Learning Services.Team
488
+ Background and Preferred Candidate History:Do you only want candidates with a
489
+ similar background or would you like to see candidates with a diverse industry
490
+ background?Diverse industry background, finance background preferred. Manager
491
+ is more focused on the skillset.Describe the dynamic of your team and where this
492
+ candidate will fit into the overall environment:This person will work with a variety
493
+ of titles including application architects, web engineers, data engineers, data
494
+ scientists, application system managers, system integrators, and Quality Engineers.Will
495
+ work with various teams, but primarily working with one core team - approx 15
496
+ - onshore and offshore resources.Candidate Technical and skills profile:Describe
497
+ the role and the key responsibilities in order of which they will be doing daily:Machine
498
+ Learning Engineer that work with Data Scientists in a SDLC environment into production.Interviews:Describe
499
+ interview process (who will be involved, how many interviews, etc.):1 round -
500
+ 1 hour minimum, panel style
+ datasets:
+ - ShushantLLM/ai-job-embedding-finetuning
+ pipeline_tag: sentence-similarity
+ library_name: sentence-transformers
+ metrics:
+ - cosine_accuracy
+ model-index:
+ - name: SentenceTransformer based on sentence-transformers/paraphrase-multilingual-mpnet-base-v2
+   results:
+   - task:
+       type: triplet
+       name: Triplet
+     dataset:
+       name: ai job validation
+       type: ai-job-validation
+     metrics:
+     - type: cosine_accuracy
+       value: 0.9801980257034302
+       name: Cosine Accuracy
+   - task:
+       type: triplet
+       name: Triplet
+     dataset:
+       name: ai job test
+       type: ai-job-test
+     metrics:
+     - type: cosine_accuracy
+       value: 0.9607843160629272
+       name: Cosine Accuracy
+ ---
+
+ # SentenceTransformer based on sentence-transformers/paraphrase-multilingual-mpnet-base-v2
+
+ This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/paraphrase-multilingual-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2) on the [ai-job-embedding-finetuning](https://huggingface.co/datasets/ShushantLLM/ai-job-embedding-finetuning) dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
+
+ ## Model Details
+
+ ### Model Description
+ - **Model Type:** Sentence Transformer
+ - **Base model:** [sentence-transformers/paraphrase-multilingual-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2) <!-- at revision 4328cf26390c98c5e3c738b4460a05b95f4911f5 -->
+ - **Maximum Sequence Length:** 128 tokens
+ - **Output Dimensionality:** 768 dimensions
+ - **Similarity Function:** Cosine Similarity
+ - **Training Dataset:**
+     - [ai-job-embedding-finetuning](https://huggingface.co/datasets/ShushantLLM/ai-job-embedding-finetuning)
+ <!-- - **Language:** Unknown -->
+ <!-- - **License:** Unknown -->
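+
+ The sequence-length and dimensionality figures above can be read straight off a loaded model; a quick check (illustrative only) is:
+
+ ```python
+ from sentence_transformers import SentenceTransformer
+
+ model = SentenceTransformer("ShushantLLM/paraphrase-multilingual-mpnet-base-v2")
+ print(model.max_seq_length)                      # 128 (longer inputs are truncated)
+ print(model.get_sentence_embedding_dimension())  # 768
+ ```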
+
+ ### Model Sources
+
+ - **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
+ - **Repository:** [Sentence Transformers on GitHub](https://github.com/huggingface/sentence-transformers)
+ - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
+
+ ### Full Model Architecture
+
+ ```
+ SentenceTransformer(
+   (0): Transformer({'max_seq_length': 128, 'do_lower_case': False, 'architecture': 'XLMRobertaModel'})
+   (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
+ )
+ ```
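+
+ The pooling module mirrors `1_Pooling/config.json`: only `pooling_mode_mean_tokens` is enabled, so a sentence embedding is the attention-masked mean of the transformer's token embeddings. As a minimal sketch of that computation (illustrative only, using the base checkpoint and plain `transformers`; in practice use the `SentenceTransformer` API shown below):
+
+ ```python
+ import torch
+ from transformers import AutoModel, AutoTokenizer
+
+ checkpoint = "sentence-transformers/paraphrase-multilingual-mpnet-base-v2"
+ tokenizer = AutoTokenizer.from_pretrained(checkpoint)
+ encoder = AutoModel.from_pretrained(checkpoint)
+
+ batch = tokenizer(["example sentence"], padding=True, truncation=True, max_length=128, return_tensors="pt")
+ with torch.no_grad():
+     token_embeddings = encoder(**batch).last_hidden_state       # (batch, seq_len, 768)
+
+ # Mean pooling over real tokens only; padding positions are masked out,
+ # matching pooling_mode_mean_tokens=True above.
+ mask = batch["attention_mask"].unsqueeze(-1).float()            # (batch, seq_len, 1)
+ sentence_embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)
+ ```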
+
+ ## Usage
+
+ ### Direct Usage (Sentence Transformers)
+
+ First install the Sentence Transformers library:
+
+ ```bash
+ pip install -U sentence-transformers
+ ```
+
+ Then you can load this model and run inference.
+ ```python
+ from sentence_transformers import SentenceTransformer
+
+ # Download from the 🤗 Hub
+ model = SentenceTransformer("ShushantLLM/paraphrase-multilingual-mpnet-base-v2")
+ # Run inference
+ queries = [
+     "Data organization, document analysis, records management",
+ ]
+ documents = [
+ 'skills and build your career in a rapidly evolving business climate? Are you looking for a career where professional development is embedded in your employer’s core culture? If so, Chenega Military, Intelligence & Operations Support (MIOS) could be the place for you! Join our team of professionals who support large-scale government operations by leveraging cutting-edge technology and take your career to the next level!\n\nAs one of the newest Chenega companies, Chenega Defense & Aerospace Solutions (CDAS) was developed with the purpose of providing expert Engineering and Technical Support Services to federal customers.\n\nThe Data Analyst will analyze a large variety of documents to ensure proper placement in physical files, perform high-level scanning of master file documents to convert them into an electronic format, and provide meticulous organization and management of case files, including sorting and categorizing documents before scanning.\n\nResponsibilities\n\nWork within the Standard Operating Procedure for the organization of physical files containing documents of various types Establish or maintain physical files, including proper placement of documents as they are createdDisseminate significant amounts of information with attention to detail and accuracyPerform word processing tasksPerform data entry and metadata entry for electronic documentsReconcile inconsistenciesGather information and organize investigative packages, case files, or presentationsObtain additional information from other investigative agencies or databasesVerify information and files against the tracking systemMaintain internal status information on the disposition of designated information and filesDistribute and receive documentsAssist analyst or government official in obtaining or collecting all documents or information to complete case fileProvide administrative information and assistance concerning the case or files to other agencies or organizationsOther duties as assigned\n\n\nQualifications\n\nHigh school diploma or GED equivalent required Must have resided in the United States for at least three out of the last five years or worked for the U.S. 
in a foreign country as either an employee or contractor in a federal or military capacity for at least three of the last five yearsHaving your own Personally Owned Vehicle (POV) is requiredPossess a demonstrated ability to analyze documents to extract informationGood oral and written communication skillsHave hands-on familiarity with a variety of computer applications,Must have a working knowledge of a variety of computer software applications in word processing, spreadsheets, databases, presentation software (MS Word, Excel, PowerPoint), and OutlookA valid driver’s license is requiredTop Secret clearance required \n\n\nKnowledge, Skills, And Abilities\n\nPossess a demonstrated ability to analyze documents to extract informationGood oral and written communication skillsHave hands-on familiarity with a variety of computer applications, including word processing, database, spreadsheet, and telecommunications softwareMust be a team playerMust be able to work independently and with USMS staff to interpret data rapidly and accurately for proper execution in a records management databaseMust have a working knowledge of a variety of computer software applications in word processing, spreadsheets, databases, presentation software (MS Word, Excel, Access, PowerPoint), and OutlookAbility to work independently on tasks be a self-starter and complete projects with a team as they ariseAttention to detail and the ability to direct the work of others efficiently and effectivelyAbility to consistently deliver high-quality work under extreme pressureAbility to work shiftworkAbility to lift and move boxes up to 25 pounds, including frequently utilizing hands, arms, and legs for file placement and removalExperience with scanning software\n\n\nHow You’ll Grow\n\nAt Chenega MIOS, our professional development plan focuses on helping our team members at every level of their career to identify and use their strengths to do their best work every day. From entry-level employees to senior leaders, we believe there’s always room to learn.\n\nWe offer opportunities to help sharpen skills in addition to hands-on experience in the global, fast-changing business world. From on-the-job learning experiences to formal development programs, our professionals have a variety of opportunities to continue to grow throughout their careers.\n\nBenefits\n\nAt Chenega MIOS, we know that great people make a great organization. We value our team members and offer them a broad range of benefits.\n\nLearn more about what working at Chenega MIOS can mean for you.\n\nChenega MIOS’s culture\n\nOur positive and supportive culture encourages our team members to do their best work every day. We celebrate individuals by recognizing their uniqueness and offering them the flexibility to make daily choices that can help them be healthy, centered, confident, and aware. We offer well-being programs and continuously look for new ways to maintain a culture where we excel and lead healthy, happy lives.\n\nCorporate citizenship\n\nChenega MIOS is led by a purpose to make an impact that matters. This purpose defines who we are and extends to relationships with our clients, our team members, and our communities. We believe that business has the power to inspire and transform. 
We focus on education, giving, skill-based volunteerism, and leadership to help drive positive social impact in our communities.\n\nLearn more about Chenega’s impact on the world.\n\nChenega MIOS News- https://chenegamios.com/news/\n\nTips from your Talent Acquisition team\n\nWe Want Job Seekers Exploring Opportunities At Chenega MIOS To Feel Prepared And Confident. To Help You With Your Research, We Suggest You Review The Following Links\n\nChenega MIOS web site - www.chenegamios.com\n\nGlassdoor - https://www.glassdoor.com/Overview/Working-at-Chenega-MIOS-EI_IE369514.11,23.htm\n\nLinkedIn - https://www.linkedin.com/company/1472684/\n\nFacebook - https://www.facebook.com/chenegamios/\n\n#DICE\n\n#Chenega Defense & Aerospace Solutions, LLC',
+ 'skills will be difficult. The more aligned skills they have, the better.Organizational Structure And Impact:Describe the function your group supports from an LOB perspective:Experienced ML engineer to work on universal forecasting models. Focus on ML forecasting, Python and Hadoop. Experience with Python, ARIMA, FB Prophet, Seasonal Naive, Gluon.Data Science Innovation (DSI) is a very unique application. It is truly ML-driven at its heart and our forecasting models originally looked singularly at cash balance forecasting. That has all changed as we have now incorporated approximately 100 additional financial metrics from our new DSI Metrics Farm. This allows future model executions to become a Universal Forecasting Model instead of being limited to just cash forecasting. It’s a very exciting application, especially since the models have been integrated within a Marketplace concept UI that allows Subscriber/Contributor functionality to make information and processing more personal and with greater extensibility across the enterprise. The application architecture is represented by OpenShift, Linux, Oracle, SQL Server, Hadoop, MongoDB, APIs, and a great deal of Python code.Describe the current initiatives that this resource will be impacting:Working toward implementation of Machine Learning Services.Team Background and Preferred Candidate History:Do you only want candidates with a similar background or would you like to see candidates with a diverse industry background?Diverse industry background, finance background preferred. Manager is more focused on the skillset.Describe the dynamic of your team and where this candidate will fit into the overall environment:This person will work with a variety of titles including application architects, web engineers, data engineers, data scientists, application system managers, system integrators, and Quality Engineers.Will work with various teams, but primarily working with one core team - approx 15 - onshore and offshore resources.Candidate Technical and skills profile:Describe the role and the key responsibilities in order of which they will be doing daily:Machine Learning Engineer that work with Data Scientists in a SDLC environment into production.Interviews:Describe interview process (who will be involved, how many interviews, etc.):1 round - 1 hour minimum, panel style',
587
+ "Qualifications\n Data Engineering, Data Modeling, and ETL (Extract Transform Load) skillsData Warehousing and Data Analytics skillsExperience with data-related tools and technologiesStrong problem-solving and analytical skillsExcellent written and verbal communication skillsAbility to work independently and remotelyExperience with cloud platforms (e.g., AWS, Azure) is a plusBachelor's degree in Computer Science, Information Systems, or related field",
588
+ ]
589
+ query_embeddings = model.encode_query(queries)
590
+ document_embeddings = model.encode_document(documents)
591
+ print(query_embeddings.shape, document_embeddings.shape)
592
+ # [1, 768] [3, 768]
593
+
594
+ # Get the similarity scores for the embeddings
595
+ similarities = model.similarity(query_embeddings, document_embeddings)
596
+ print(similarities)
597
+ # tensor([[ 0.0065, 0.0405, -0.2204]])
598
+ ```
599
+
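+ As a small follow-up (reusing the `documents` and `similarities` objects from the snippet above), the similarity scores can be used to rank the job descriptions for a query:
+
+ ```python
+ # Rank the documents for the single query by cosine similarity.
+ scores = similarities[0]
+ ranking = scores.argsort(descending=True).tolist()
+ for rank, idx in enumerate(ranking, start=1):
+     print(f"{rank}. score={scores[idx].item():.4f}  {documents[idx][:60]}...")
+ ```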
600
+ <!--
601
+ ### Direct Usage (Transformers)
602
+
603
+ <details><summary>Click to see the direct usage in Transformers</summary>
604
+
605
+ </details>
606
+ -->
607
+
608
+ <!--
609
+ ### Downstream Usage (Sentence Transformers)
610
+
611
+ You can finetune this model on your own dataset.
612
+
613
+ <details><summary>Click to expand</summary>
614
+
615
+ </details>
616
+ -->
617
+
618
+ <!--
619
+ ### Out-of-Scope Use
620
+
621
+ *List how the model may foreseeably be misused and address what users ought not to do with the model.*
622
+ -->
623
+
624
+ ## Evaluation
625
+
626
+ ### Metrics
627
+
628
+ #### Triplet
629
+
630
+ * Datasets: `ai-job-validation` and `ai-job-test`
631
+ * Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator)
632
+
633
+ | Metric | ai-job-validation | ai-job-test |
634
+ |:--------------------|:------------------|:------------|
635
+ | **cosine_accuracy** | **0.9802** | **0.9608** |
636
+
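+ The accuracies above were computed with `TripletEvaluator`. The snippet below is a minimal reproduction sketch rather than the exact evaluation script: the model path and the split name (`validation`) are assumptions, while the dataset id and column names match the training dataset described below.
+
+ ```python
+ from datasets import load_dataset
+ from sentence_transformers import SentenceTransformer
+ from sentence_transformers.evaluation import TripletEvaluator
+
+ # Assumed model location; replace with the actual Hub id or local path of this model.
+ model = SentenceTransformer("path/to/this/checkpoint")
+
+ # Assumed split name; the card reports 101 evaluation samples.
+ eval_split = load_dataset("ShushantLLM/ai-job-embedding-finetuning", split="validation")
+
+ evaluator = TripletEvaluator(
+     anchors=eval_split["query"],
+     positives=eval_split["job_description_pos"],
+     negatives=eval_split["job_description_neg"],
+     name="ai-job-validation",
+ )
+ print(evaluator(model))  # e.g. {'ai-job-validation_cosine_accuracy': 0.98...}
+ ```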
637
+ <!--
638
+ ## Bias, Risks and Limitations
639
+
640
+ *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
641
+ -->
642
+
643
+ <!--
644
+ ### Recommendations
645
+
646
+ *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
647
+ -->
648
+
649
+ ## Training Details
650
+
651
+ ### Training Dataset
652
+
653
+ #### ai-job-embedding-finetuning
654
+
655
+ * Dataset: [ai-job-embedding-finetuning](https://huggingface.co/datasets/ShushantLLM/ai-job-embedding-finetuning) at [1de228a](https://huggingface.co/datasets/ShushantLLM/ai-job-embedding-finetuning/tree/1de228a8cb18a24605027066b73f54957a2b9ce0)
656
+ * Size: 810 training samples
657
+ * Columns: <code>query</code>, <code>job_description_pos</code>, and <code>job_description_neg</code>
658
+ * Approximate statistics based on the first 810 samples:
659
+ | | query | job_description_pos | job_description_neg |
660
+ |:--------|:----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
661
+ | type | string | string | string |
662
+ | details | <ul><li>min: 9 tokens</li><li>mean: 17.49 tokens</li><li>max: 43 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 121.41 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 122.26 tokens</li><li>max: 128 tokens</li></ul> |
663
+ * Samples:
664
+ | query | job_description_pos | job_description_neg |
665
+ |:-----------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
666
+ | <code>Senior Data Analyst, monitoring systems, dashboard development, statistical analysis</code> | <code>experience where you can also make an impact on your community. While safety is a serious business, we are a supportive team that is optimizing the remote experience to create strong and fulfilling relationships even when we are physically apart. Our group of hard-working employees thrive in a positive and inclusive environment, where a bias towards action is rewarded.<br><br>We have raised over $380M in venture capital from investors including Tiger Global, Andreessen Horowitz, Matrix Partners, Meritech Capital Partners, and Initialized Capital. Now surpassing a $3B valuation, Flock is scaling intentionally and seeking the best and brightest to help us meet our goal of reducing crime in the United States by 25% in the next three years.<br><br>The Opportunity<br><br>As a Senior Data Analyst on the ML team, you will be responsible for extracting insights aggregated from various data sources, developing dashboards to identify trends and patterns that highlight model performance issues, performing analysis...</code> | <code>SKILLS and EXPERIENCE:3-5+ years of experience domain knowledge with either support of core Banking application experience, Mortgage Servicing or Loan Originations or personal or auto loans within Finance Industry environmentAble to interact with the VP or C-level Business Executives and higher to gather requirements and collaborate with IT; working effectively and independently as well as be collaborative team-oriented team player.Ideally supported Mortgage servicing systems such as Black Knight’s MSP, Sagent, Finastra’s Fusion Servicing Director, Interlinq Loan Servicing (ILS) or other loan servicing platform OR support of other core banking or originations platformSome experience with the following core technologies: T-SQL; SQL Server 2016 or higher; Visual Studio 2017 or higher; SQL Server Data Tools; Team Foundation ServerWorking knowledge of T-SQL programming and scripting, as well as optimization techniques· 3 years of experience with a strong focus on SQL Relational databases, ...</code> |
667
+ | <code>advanced analytics, financial strategy, data visualization</code> | <code>skills and business acumen to drive impactful results that inform strategic decisions.Commitment to iterative development, with a proven ability to engage and update stakeholders bi-weekly or as necessary, ensuring alignment, feedback incorporation, and transparency throughout the project lifecycle.Project ownership and development from inception to completion, encompassing tasks such as gathering detailed requirements, data preparation, model creation, result generation, and data visualization. Develop insights, methods or tools using various analytic methods such as causal-model approaches, predictive modeling, regressions, machine learning, time series analysis, etc.Handle large amounts of data from multiple and disparate sources, employing advanced Python and SQL techniques to ensure efficiency and accuracyUphold the highest standards of data integrity and security, aligning with both internal and external regulatory requirements and compliance protocols<br><br>Required Qualifications, C...</code> | <code>experience Life at Visa.<br><br>Job Description<br><br>About the Team:<br><br>VISA is the leader in the payment industry and has been for a long time, but we are also quickly transitioning into a technology company that is fostering an environment for applying the newest technology to solve exciting problems in this area. For a payment system to work well, the risk techniques, performance, and scalability are critical. These techniques and systems benefit from big data, data mining, artificial intelligence, machine learning, cloud computing, & many other advance technologies. At VISA, we have all of these. If you want to be on the cutting edge of the payment space, learn fast, and make a big impact, then the Artificial Intelligence Platform team may be an ideal place for you!<br><br>Our team needs a Senior Data Engineer with proven knowledge of web application and web service development who will focus on creating new capabilities for the AI Platform while maturing our code base and development processes. You...</code> |
668
+ | <code>Clinical Operations data analysis, eTMF, EDC implementation, advanced analytics visualization</code> | <code>requirements, and objectives for Clinical initiatives Technical SME for system activities for the clinical system(s), enhancements, and integration projects. Coordinates support activities across vendor(s) Systems include but are not limited to eTMF, EDC, CTMS and Analytics Interfaces with external vendors at all levels to manage the relationship and ensure the proper delivery of services Document Data Transfer Agreements for Data Exchange between BioNTech and Data Providers (CRO, Partner Organizations) Document Data Transformation logic and interact with development team to convert business logic into technical details <br><br> What you have to offer: <br><br> Bachelor’s or higher degree in a scientific discipline (e.g., computer science/information systems, engineering, mathematics, natural sciences, medical, or biomedical science) Extensive experience/knowledge of technologies and trends including Visualizations /Advanced Analytics Outstanding analytical skills and result orientation Ab...</code> | <code>Requirements<br><br>Typically requires 13+ years of professional experience and 6+ years of diversified leadership, planning, communication, organization, and people motivation skills (or equivalent experience).<br><br>Critical Skills<br><br>12+ years of experience in a technology role; proven experience in a leadership role, preferably in a large, complex organization.8+ years Data Engineering, Emerging Technology, and Platform Design experience4+ years Leading large data / technical teams – Data Engineering, Solution Architects, and Business Intelligence Engineers, encouraging a culture of innovation, collaboration, and continuous improvement.Hands-on experience building and delivering Enterprise Data SolutionsExtensive market knowledge and experience with cutting edge Data, Analytics, Data Science, ML and AI technologiesExtensive professional experience with ETL, BI & Data AnalyticsExtensive professional experience with Big Data systems, data pipelines and data processingDeep expertise in Data Archit...</code> |
669
+ * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
670
+ ```json
671
+ {
672
+ "scale": 20.0,
673
+ "similarity_fct": "cos_sim",
674
+ "gather_across_devices": false
675
+ }
676
+ ```
677
+
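+ As a rough illustration (not the exact training code), the configuration above corresponds to constructing the loss as follows; the checkpoint name is a placeholder, and `gather_across_devices` is simply left at its default of `False`:
+
+ ```python
+ from sentence_transformers import SentenceTransformer, losses, util
+
+ model = SentenceTransformer("base-or-finetuned-checkpoint")  # placeholder checkpoint name
+ loss = losses.MultipleNegativesRankingLoss(
+     model=model,
+     scale=20.0,                   # "scale" from the config above
+     similarity_fct=util.cos_sim,  # "cos_sim" from the config above
+ )
+ ```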
678
+ ### Evaluation Dataset
679
+
680
+ #### ai-job-embedding-finetuning
681
+
682
+ * Dataset: [ai-job-embedding-finetuning](https://huggingface.co/datasets/ShushantLLM/ai-job-embedding-finetuning) at [1de228a](https://huggingface.co/datasets/ShushantLLM/ai-job-embedding-finetuning/tree/1de228a8cb18a24605027066b73f54957a2b9ce0)
683
+ * Size: 101 evaluation samples
684
+ * Columns: <code>query</code>, <code>job_description_pos</code>, and <code>job_description_neg</code>
685
+ * Approximate statistics based on the first 101 samples:
686
+ | | query | job_description_pos | job_description_neg |
687
+ |:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
688
+ | type | string | string | string |
689
+ | details | <ul><li>min: 10 tokens</li><li>mean: 17.83 tokens</li><li>max: 32 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 122.03 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 119.95 tokens</li><li>max: 128 tokens</li></ul> |
690
+ * Samples:
691
+ | query | job_description_pos | job_description_neg |
692
+ |:---------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
693
+ | <code>Azure Data Factory, Databricks, Snowflake architecture</code> | <code>Skills: SQL, PySpark, Databricks, Azure Synapse, Azure Data Factory.<br>Need hands-on coding<br>Requirements:1. Extensive knowledge of any of the big cloud services - Azure, AWS or GCP with practical implementation (like S3, ADLS, Airflow, ADF, Lamda, BigQuery, EC2, Fabric, Databricks or equivalent)2. Strong Hands-on experience in SQL and Python/PySpark programming knowledge. Should be able to write code during an interview with minimal syntax error.3. Strong foundational and architectural knowledge of any of the data warehouses - Snowflake, Redshift. Synapse etc.4. Should be able to drive and deliver projects with little or no guidance. Take ownership, become a self-learner, and have leadership qualities.</code> | <code>experience for yourself, and a better working world for all.<br><br>Data Analyst, Technology Consulting - Data & Analytics (Data Governance & Controls) - Financial Services Office (Manager) (Multiple Positions), Ernst & Young U.S. LLP, New York, NY. <br><br>Work with clients to transform the way they use and manage data by architecting data strategies, providing end-to-end solutions that focus on improving their data supply chain, reengineering processes, enhancing risk control, and enabling information intelligence by harnessing latest advanced technologies. Solve complex issues and drive growth across financial services. Define data and analytic strategies by performing assessments, recommending remediation strategies/solutions based on aggregated view of identified gaps, and designing/implementing future state data and analytics solutions. Manage and coach diverse teams of professionals with different backgrounds. Manage cross functional teams, to ensure project task and timeline accountability...</code> |
694
+ | <code>Big Data Engineer, Spark, Hadoop, AWS GCP</code> | <code>Skills • Expertise and hands-on experience on Spark, and Hadoop echo system components – Must Have • Good and hand-on experience* of any of the Cloud (AWS/GCP) – Must Have • Good knowledge of HiveQL & SparkQL – Must Have Good knowledge of Shell script & Java/Scala/python – Good to Have • Good knowledge of SQL – Good to Have • Good knowledge of migration projects on Hadoop – Good to Have • Good Knowledge of one of the Workflow engines like Oozie, Autosys – Good to Have Good knowledge of Agile Development– Good to Have • Passionate about exploring new technologies – Good to Have • Automation approach – Good to Have <br>Thanks & RegardsShahrukh KhanEmail: shahrukh@zentekinfosoft.com</code> | <code>Requirements: We're looking for a candidate with exceptional proficiency in Google Sheets. This expertise should include manipulating, analyzing, and managing data within Google Sheets. The candidate should be outstanding at extracting business logic from existing reports and implementing it into new ones. Although a basic understanding of SQL for tasks related to data validation and metrics calculations is beneficial, the primary skill we are seeking is proficiency in Google Sheets. This role will involve working across various cross-functional teams, so strong communication skills are essential. The position requires a meticulous eye for detail, a commitment to delivering high-quality results, and above all, exceptional competency in Google Sheets<br><br>Google sheet knowledge is preferred.Strong Excel experience without Google will be considered.Data Validation and formulas to extract data are a mustBasic SQL knowledge is required.Strong communications skills are requiredInterview process...</code> |
695
+ | <code>Energy policy analysis, regulatory impact modeling, distributed energy resource management.</code> | <code>skills, modeling, energy data analysis, and critical thinking are required for a successful candidate. Knowledge of energy systems and distributed solar is required.<br><br>Reporting to the Senior Manager of Government Affairs, you will work across different teams to model data to inform policy advocacy. The ability to obtain data from multiple sources, including regulatory or legislative hearings, academic articles, and reports, are fundamental to the role.<br><br>A willingness to perform under deadlines and collaborate within an organization is required. Honesty, accountability, and integrity are a must.<br><br>Energy Policy & Data Analyst Responsibilities<br><br>Support Government Affairs team members with energy policy recommendations based on data modelingEvaluate relevant regulatory or legislative filings and model the impacts to Sunnova’s customers and businessAnalyze program proposals (grid services, incentives, net energy metering, fixed charges) and develop recommendations that align with Sunnova’s ...</code> | <code>QualificationsData Engineering, Data Modeling, and ETL (Extract Transform Load) skillsMonitor and support data pipelines and ETL workflowsData Warehousing and Data Analytics skillsExperience with Azure cloud services and toolsStrong problem-solving and analytical skillsProficiency in SQL and other programming languagesExperience with data integration and data migrationExcellent communication and collaboration skillsBachelor's degree in Computer Science, Engineering, or related field<br>Enterprise Required SkillsPython, Big data, Data warehouse, ETL, Development, azure, Azure Data Factory, Azure Databricks, Azure SQL Server, Snowflake, data pipelines<br>Top Skills Details1. 3+ years with ETL Development with Azure stack (Azure Data Factory, Azure Databricks, Azure Blob, Azure SQL). 2. 3+ years with Spark, SQL, and Python. This will show up with working with large sets of data in an enterprise environment. 3. Looking for Proactive individuals who have completed projects from start to complet...</code> |
696
+ * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
697
+ ```json
698
+ {
699
+ "scale": 20.0,
700
+ "similarity_fct": "cos_sim",
701
+ "gather_across_devices": false
702
+ }
703
+ ```
704
+
705
+ ### Training Hyperparameters
706
+ #### Non-Default Hyperparameters
707
+
708
+ - `eval_strategy`: steps
709
+ - `per_device_train_batch_size`: 16
710
+ - `per_device_eval_batch_size`: 16
711
+ - `learning_rate`: 2e-05
712
+ - `num_train_epochs`: 5
713
+ - `warmup_ratio`: 0.1
714
+ - `batch_sampler`: no_duplicates
715
+
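+ A minimal sketch of how these non-default values map onto `SentenceTransformerTrainingArguments` (illustrative only; `output_dir` is a placeholder):
+
+ ```python
+ from sentence_transformers import SentenceTransformerTrainingArguments
+ from sentence_transformers.training_args import BatchSamplers
+
+ args = SentenceTransformerTrainingArguments(
+     output_dir="models/ai-job-embedding-finetuning",  # placeholder output path
+     eval_strategy="steps",
+     per_device_train_batch_size=16,
+     per_device_eval_batch_size=16,
+     learning_rate=2e-5,
+     num_train_epochs=5,
+     warmup_ratio=0.1,
+     batch_sampler=BatchSamplers.NO_DUPLICATES,
+ )
+ ```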
716
+ #### All Hyperparameters
717
+ <details><summary>Click to expand</summary>
718
+
719
+ - `overwrite_output_dir`: False
720
+ - `do_predict`: False
721
+ - `eval_strategy`: steps
722
+ - `prediction_loss_only`: True
723
+ - `per_device_train_batch_size`: 16
724
+ - `per_device_eval_batch_size`: 16
725
+ - `per_gpu_train_batch_size`: None
726
+ - `per_gpu_eval_batch_size`: None
727
+ - `gradient_accumulation_steps`: 1
728
+ - `eval_accumulation_steps`: None
729
+ - `torch_empty_cache_steps`: None
730
+ - `learning_rate`: 2e-05
731
+ - `weight_decay`: 0.0
732
+ - `adam_beta1`: 0.9
733
+ - `adam_beta2`: 0.999
734
+ - `adam_epsilon`: 1e-08
735
+ - `max_grad_norm`: 1.0
736
+ - `num_train_epochs`: 5
737
+ - `max_steps`: -1
738
+ - `lr_scheduler_type`: linear
739
+ - `lr_scheduler_kwargs`: {}
740
+ - `warmup_ratio`: 0.1
741
+ - `warmup_steps`: 0
742
+ - `log_level`: passive
743
+ - `log_level_replica`: warning
744
+ - `log_on_each_node`: True
745
+ - `logging_nan_inf_filter`: True
746
+ - `save_safetensors`: True
747
+ - `save_on_each_node`: False
748
+ - `save_only_model`: False
749
+ - `restore_callback_states_from_checkpoint`: False
750
+ - `no_cuda`: False
751
+ - `use_cpu`: False
752
+ - `use_mps_device`: False
753
+ - `seed`: 42
754
+ - `data_seed`: None
755
+ - `jit_mode_eval`: False
756
+ - `bf16`: False
757
+ - `fp16`: False
758
+ - `fp16_opt_level`: O1
759
+ - `half_precision_backend`: auto
760
+ - `bf16_full_eval`: False
761
+ - `fp16_full_eval`: False
762
+ - `tf32`: None
763
+ - `local_rank`: 0
764
+ - `ddp_backend`: None
765
+ - `tpu_num_cores`: None
766
+ - `tpu_metrics_debug`: False
767
+ - `debug`: []
768
+ - `dataloader_drop_last`: False
769
+ - `dataloader_num_workers`: 0
770
+ - `dataloader_prefetch_factor`: None
771
+ - `past_index`: -1
772
+ - `disable_tqdm`: False
773
+ - `remove_unused_columns`: True
774
+ - `label_names`: None
775
+ - `load_best_model_at_end`: False
776
+ - `ignore_data_skip`: False
777
+ - `fsdp`: []
778
+ - `fsdp_min_num_params`: 0
779
+ - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
780
+ - `fsdp_transformer_layer_cls_to_wrap`: None
781
+ - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
782
+ - `parallelism_config`: None
783
+ - `deepspeed`: None
784
+ - `label_smoothing_factor`: 0.0
785
+ - `optim`: adamw_torch_fused
786
+ - `optim_args`: None
787
+ - `adafactor`: False
788
+ - `group_by_length`: False
789
+ - `length_column_name`: length
790
+ - `project`: huggingface
791
+ - `trackio_space_id`: trackio
792
+ - `ddp_find_unused_parameters`: None
793
+ - `ddp_bucket_cap_mb`: None
794
+ - `ddp_broadcast_buffers`: False
795
+ - `dataloader_pin_memory`: True
796
+ - `dataloader_persistent_workers`: False
797
+ - `skip_memory_metrics`: True
798
+ - `use_legacy_prediction_loop`: False
799
+ - `push_to_hub`: False
800
+ - `resume_from_checkpoint`: None
801
+ - `hub_model_id`: None
802
+ - `hub_strategy`: every_save
803
+ - `hub_private_repo`: None
804
+ - `hub_always_push`: False
805
+ - `hub_revision`: None
806
+ - `gradient_checkpointing`: False
807
+ - `gradient_checkpointing_kwargs`: None
808
+ - `include_inputs_for_metrics`: False
809
+ - `include_for_metrics`: []
810
+ - `eval_do_concat_batches`: True
811
+ - `fp16_backend`: auto
812
+ - `push_to_hub_model_id`: None
813
+ - `push_to_hub_organization`: None
814
+ - `mp_parameters`:
815
+ - `auto_find_batch_size`: False
816
+ - `full_determinism`: False
817
+ - `torchdynamo`: None
818
+ - `ray_scope`: last
819
+ - `ddp_timeout`: 1800
820
+ - `torch_compile`: False
821
+ - `torch_compile_backend`: None
822
+ - `torch_compile_mode`: None
823
+ - `include_tokens_per_second`: False
824
+ - `include_num_input_tokens_seen`: no
825
+ - `neftune_noise_alpha`: None
826
+ - `optim_target_modules`: None
827
+ - `batch_eval_metrics`: False
828
+ - `eval_on_start`: False
829
+ - `use_liger_kernel`: False
830
+ - `liger_kernel_config`: None
831
+ - `eval_use_gather_object`: False
832
+ - `average_tokens_across_devices`: True
833
+ - `prompts`: None
834
+ - `batch_sampler`: no_duplicates
835
+ - `multi_dataset_batch_sampler`: proportional
836
+ - `router_mapping`: {}
837
+ - `learning_rate_mapping`: {}
838
+
839
+ </details>
840
+
841
+ ### Training Logs
842
+ | Epoch | Step | Training Loss | Validation Loss | ai-job-validation_cosine_accuracy | ai-job-test_cosine_accuracy |
843
+ |:------:|:----:|:-------------:|:---------------:|:---------------------------------:|:---------------------------:|
844
+ | -1 | -1 | - | - | 0.8416 | - |
845
+ | 1.9608 | 100 | 1.2457 | 1.3444 | 0.9802 | - |
846
+ | 3.9216 | 200 | 0.3222 | 1.3620 | 0.9802 | - |
847
+ | -1 | -1 | - | - | 0.9802 | 0.9608 |
848
+
849
+
850
+ ### Framework Versions
851
+ - Python: 3.12.12
852
+ - Sentence Transformers: 5.1.2
853
+ - Transformers: 4.57.1
854
+ - PyTorch: 2.8.0+cu126
855
+ - Accelerate: 1.11.0
856
+ - Datasets: 4.0.0
857
+ - Tokenizers: 0.22.1
858
+
859
+ ## Citation
860
+
861
+ ### BibTeX
862
+
863
+ #### Sentence Transformers
864
+ ```bibtex
865
+ @inproceedings{reimers-2019-sentence-bert,
866
+ title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
867
+ author = "Reimers, Nils and Gurevych, Iryna",
868
+ booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
869
+ month = "11",
870
+ year = "2019",
871
+ publisher = "Association for Computational Linguistics",
872
+ url = "https://arxiv.org/abs/1908.10084",
873
+ }
874
+ ```
875
+
876
+ #### MultipleNegativesRankingLoss
877
+ ```bibtex
878
+ @misc{henderson2017efficient,
879
+ title={Efficient Natural Language Response Suggestion for Smart Reply},
880
+ author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
881
+ year={2017},
882
+ eprint={1705.00652},
883
+ archivePrefix={arXiv},
884
+ primaryClass={cs.CL}
885
+ }
886
+ ```
887
+
888
+ <!--
889
+ ## Glossary
890
+
891
+ *Clearly define terms in order to be accessible across audiences.*
892
+ -->
893
+
894
+ <!--
895
+ ## Model Card Authors
896
+
897
+ *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
898
+ -->
899
+
900
+ <!--
901
+ ## Model Card Contact
902
+
903
+ *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
904
+ -->
config.json ADDED
@@ -0,0 +1,28 @@
1
+ {
2
+ "architectures": [
3
+ "XLMRobertaModel"
4
+ ],
5
+ "attention_probs_dropout_prob": 0.1,
6
+ "bos_token_id": 0,
7
+ "classifier_dropout": null,
8
+ "dtype": "float32",
9
+ "eos_token_id": 2,
10
+ "gradient_checkpointing": false,
11
+ "hidden_act": "gelu",
12
+ "hidden_dropout_prob": 0.1,
13
+ "hidden_size": 768,
14
+ "initializer_range": 0.02,
15
+ "intermediate_size": 3072,
16
+ "layer_norm_eps": 1e-05,
17
+ "max_position_embeddings": 514,
18
+ "model_type": "xlm-roberta",
19
+ "num_attention_heads": 12,
20
+ "num_hidden_layers": 12,
21
+ "output_past": true,
22
+ "pad_token_id": 1,
23
+ "position_embedding_type": "absolute",
24
+ "transformers_version": "4.57.1",
25
+ "type_vocab_size": 1,
26
+ "use_cache": true,
27
+ "vocab_size": 250002
28
+ }
config_sentence_transformers.json ADDED
@@ -0,0 +1,14 @@
1
+ {
2
+ "__version__": {
3
+ "sentence_transformers": "5.1.2",
4
+ "transformers": "4.57.1",
5
+ "pytorch": "2.8.0+cu126"
6
+ },
7
+ "model_type": "SentenceTransformer",
8
+ "prompts": {
9
+ "query": "",
10
+ "document": ""
11
+ },
12
+ "default_prompt_name": null,
13
+ "similarity_fn_name": "cosine"
14
+ }
model.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:fed899fdbf12643054699a42289053249f9deab8f8e95c19cfe38fbd650b2135
3
+ size 1112197096
modules.json ADDED
@@ -0,0 +1,14 @@
1
+ [
2
+ {
3
+ "idx": 0,
4
+ "name": "0",
5
+ "path": "",
6
+ "type": "sentence_transformers.models.Transformer"
7
+ },
8
+ {
9
+ "idx": 1,
10
+ "name": "1",
11
+ "path": "1_Pooling",
12
+ "type": "sentence_transformers.models.Pooling"
13
+ }
14
+ ]
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
1
+ {
2
+ "max_seq_length": 128,
3
+ "do_lower_case": false
4
+ }
sentencepiece.bpe.model ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:cfc8146abe2a0488e9e2a0c56de7952f7c11ab059eca145a0a727afce0db2865
3
+ size 5069051
special_tokens_map.json ADDED
@@ -0,0 +1,51 @@
1
+ {
2
+ "bos_token": {
3
+ "content": "<s>",
4
+ "lstrip": false,
5
+ "normalized": false,
6
+ "rstrip": false,
7
+ "single_word": false
8
+ },
9
+ "cls_token": {
10
+ "content": "<s>",
11
+ "lstrip": false,
12
+ "normalized": false,
13
+ "rstrip": false,
14
+ "single_word": false
15
+ },
16
+ "eos_token": {
17
+ "content": "</s>",
18
+ "lstrip": false,
19
+ "normalized": false,
20
+ "rstrip": false,
21
+ "single_word": false
22
+ },
23
+ "mask_token": {
24
+ "content": "<mask>",
25
+ "lstrip": true,
26
+ "normalized": false,
27
+ "rstrip": false,
28
+ "single_word": false
29
+ },
30
+ "pad_token": {
31
+ "content": "<pad>",
32
+ "lstrip": false,
33
+ "normalized": false,
34
+ "rstrip": false,
35
+ "single_word": false
36
+ },
37
+ "sep_token": {
38
+ "content": "</s>",
39
+ "lstrip": false,
40
+ "normalized": false,
41
+ "rstrip": false,
42
+ "single_word": false
43
+ },
44
+ "unk_token": {
45
+ "content": "<unk>",
46
+ "lstrip": false,
47
+ "normalized": false,
48
+ "rstrip": false,
49
+ "single_word": false
50
+ }
51
+ }
tokenizer.json ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:cad551d5600a84242d0973327029452a1e3672ba6313c2a3c3d69c4310e12719
3
+ size 17082987
tokenizer_config.json ADDED
@@ -0,0 +1,62 @@
1
+ {
2
+ "added_tokens_decoder": {
3
+ "0": {
4
+ "content": "<s>",
5
+ "lstrip": false,
6
+ "normalized": false,
7
+ "rstrip": false,
8
+ "single_word": false,
9
+ "special": true
10
+ },
11
+ "1": {
12
+ "content": "<pad>",
13
+ "lstrip": false,
14
+ "normalized": false,
15
+ "rstrip": false,
16
+ "single_word": false,
17
+ "special": true
18
+ },
19
+ "2": {
20
+ "content": "</s>",
21
+ "lstrip": false,
22
+ "normalized": false,
23
+ "rstrip": false,
24
+ "single_word": false,
25
+ "special": true
26
+ },
27
+ "3": {
28
+ "content": "<unk>",
29
+ "lstrip": false,
30
+ "normalized": false,
31
+ "rstrip": false,
32
+ "single_word": false,
33
+ "special": true
34
+ },
35
+ "250001": {
36
+ "content": "<mask>",
37
+ "lstrip": true,
38
+ "normalized": false,
39
+ "rstrip": false,
40
+ "single_word": false,
41
+ "special": true
42
+ }
43
+ },
44
+ "bos_token": "<s>",
45
+ "clean_up_tokenization_spaces": false,
46
+ "cls_token": "<s>",
47
+ "eos_token": "</s>",
48
+ "extra_special_tokens": {},
49
+ "mask_token": "<mask>",
50
+ "max_length": 128,
51
+ "model_max_length": 128,
52
+ "pad_to_multiple_of": null,
53
+ "pad_token": "<pad>",
54
+ "pad_token_type_id": 0,
55
+ "padding_side": "right",
56
+ "sep_token": "</s>",
57
+ "stride": 0,
58
+ "tokenizer_class": "XLMRobertaTokenizer",
59
+ "truncation_side": "right",
60
+ "truncation_strategy": "longest_first",
61
+ "unk_token": "<unk>"
62
+ }