sachindatasociety committed on
Commit dc58844 · verified · 1 Parent(s): 570bf9b

Add new SentenceTransformer model.
1_Pooling/config.json ADDED
@@ -0,0 +1,10 @@
+ {
+     "word_embedding_dimension": 768,
+     "pooling_mode_cls_token": true,
+     "pooling_mode_mean_tokens": false,
+     "pooling_mode_max_tokens": false,
+     "pooling_mode_mean_sqrt_len_tokens": false,
+     "pooling_mode_weightedmean_tokens": false,
+     "pooling_mode_lasttoken": false,
+     "include_prompt": true
+ }
README.md ADDED
@@ -0,0 +1,469 @@
+ ---
+ base_model: BAAI/bge-base-en-v1.5
+ library_name: sentence-transformers
+ pipeline_tag: sentence-similarity
+ tags:
+ - sentence-transformers
+ - sentence-similarity
+ - feature-extraction
+ - generated_from_trainer
+ - dataset_size:48
+ - loss:MultipleNegativesRankingLoss
+ widget:
+ - source_sentence: 'Fundamentals of Deep Learning for Multi GPUs. Find out how to
+     use multiple GPUs to train neural networks and effectively parallelize\ntraining
+     of deep neural networks using TensorFlow.. tags: multiple GPUs, neural networks,
+     TensorFlow, parallelize. Languages: Course language: Python. Prerequisites: No
+     prerequisite course required. Target audience: Professionals want to train deep
+     neural networks on multi-GPU technology to shorten\nthe training time required
+     for data-intensive applications.'
+   sentences:
+   - 'Course Name:Hypothesis Testing in Python|Course Description:In this course, learners
+     with foundational knowledge of statistical concepts will dive deeper into hypothesis
+     testing by focusing on three standard tests of statistical significance: t-tests,
+     F-tests, and chi-squared tests. Covering topics such as t-value, t-distribution,
+     chi-square distribution, F-statistic, and F-distribution, this course will familiarize
+     learners with techniques that will enable them to assess normality of data and
+     goodness-of-fit and to compare observed and expected frequencies objectively.|Tags:f-distribution,
+     chi-square distribution, f-statistic, t-distribution, t-value|Course language:
+     Python|Target Audience:Professionals some Python experience who would like to
+     expand their skill set to more advanced Python visualization techniques and tools.|Prerequisite
+     course required: Foundations of Statistics in Python'
+   - 'Course Name:Foundations of Data & AI Literacy for Managers|Course Description:Designed
+     for managers leading teams and projects, this course empowers individuals to build
+     data-driven organizations and integrate AI tools into daily operations. Learners
+     will gain a foundational understanding of data and AI concepts and learn how to
+     leverage them for actionable business insights. Managers will develop the skills
+     to increase collaboration with technical experts and make informed decisions about
+     analysis methods, ensuring their enterprise thrives in today’s data-driven landscape.|Tags:Designed,
+     managers, leading, teams, projects,, course, empowers, individuals, build, data-driven,
+     organizations, integrate, AI, tools, into, daily, operations., Learners, will,
+     gain, foundational, understanding, data, AI, concepts, learn, how, leverage, them,
+     actionable, business, insights., Managers, will, develop, skills, increase, collaboration,
+     technical, experts, make, informed, decisions, about, analysis, methods,, ensuring,
+     their, enterprise, thrives, today’s, data-driven, landscape.|Course language:
+     None|Target Audience:No target audience|No prerequisite course required'
+   - 'Course Name:Fundamentals of Deep Learning for Multi GPUs|Course Description:Find
+     out how to use multiple GPUs to train neural networks and effectively parallelize\ntraining
+     of deep neural networks using TensorFlow.|Tags:multiple GPUs, neural networks,
+     TensorFlow, parallelize|Course language: Python|Target Audience:Professionals
+     want to train deep neural networks on multi-GPU technology to shorten\nthe training
+     time required for data-intensive applications|No prerequisite course required'
+ - source_sentence: 'Data Visualization Design & Storytelling. This course focuses
+     on the fundamentals of data visualization, which helps support data-driven decision-making
+     and to create a data-driven culture.. tags: data driven culture, data analytics,
+     data literacy, data quality, storytelling, data science. Languages: Course language:
+     TBD. Prerequisites: No prerequisite course required. Target audience: Professionals
+     who would like to understand more about how to visualize data, design and concepts
+     of storytelling through data..'
+   sentences:
+   - 'Course Name:Building Transformer-Based NLP Applications (NVIDIA)|Course Description:Learn
+     how to apply and fine-tune a Transformer-based Deep Learning model to Natural
+     Language Processing (NLP) tasks. In this course, you''ll construct a Transformer
+     neural network in PyTorch, Build a named-entity recognition (NER) application
+     with BERT, Deploy the NER application with ONNX and TensorRT to a Triton inference
+     server. Upon completion, you’ll be proficient i.n task-agnostic applications of
+     Transformer-based models. Data Society''s instructors are certified by NVIDIA’s
+     Deep Learning Institute to teach this course.|Tags:named-entity recognition, text,
+     Natural language processing, classification, NLP, NER|Course language: Python|Target
+     Audience:Professionals with basic knowledge of neural networks and want to expand
+     their knowledge in the world of Natural langauge processing|No prerequisite course
+     required'
+   - 'Course Name:Nonlinear Regression in Python|Course Description:In this course,
+     learners will practice implementing a variety of nonlinear regression techniques
+     in Python to model complex relationships beyond simple linear patterns. They will
+     learn to interpret key transformations, including logarithmic (log-log, log-linear)
+     and polynomial models, and identify interaction effects between predictor variables.
+     Through hands-on exercises, they will also develop practical skills in selecting,
+     fitting, and validating the most appropriate nonlinear model for their data.|Tags:nonlinear,
+     regression|Course language: Python|Target Audience:This is an intermediate level
+     course for data scientists who want to learn to understand and estimate relationships
+     between a set of independent variables and a continuous dependent variable.|Prerequisite
+     course required: Multiple Linear Regression'
+   - 'Course Name:Data Visualization Design & Storytelling|Course Description:This
+     course focuses on the fundamentals of data visualization, which helps support
+     data-driven decision-making and to create a data-driven culture.|Tags:data driven
+     culture, data analytics, data literacy, data quality, storytelling, data science|Course
+     language: TBD|Target Audience:Professionals who would like to understand more
+     about how to visualize data, design and concepts of storytelling through data.|No
+     prerequisite course required'
+ - source_sentence: 'Foundations of Probability Theory in Python. This course guides
+     learners through a comprehensive review of advanced statistics topics on probability,
+     such as permutations and combinations, joint probability, conditional probability,
+     and marginal probability. Learners will also become familiar with Bayes’ theorem,
+     a rule that provides a way to calculate the probability of a cause given its outcome.
+     By the end of this course, learners will also be able to assess the likelihood
+     of events being independent to indicate whether further statistical analysis is
+     likely to yield results.. tags: conditional probability, bayes'' theorem. Languages:
+     Course language: Python. Prerequisites: Prerequisite course required: Hypothesis
+     Testing in Python. Target audience: Professionals some Python experience who would
+     like to expand their skill set to more advanced Python visualization techniques
+     and tools..'
+   sentences:
+   - 'Course Name:Foundations of Probability Theory in Python|Course Description:This
+     course guides learners through a comprehensive review of advanced statistics topics
+     on probability, such as permutations and combinations, joint probability, conditional
+     probability, and marginal probability. Learners will also become familiar with
+     Bayes’ theorem, a rule that provides a way to calculate the probability of a cause
+     given its outcome. By the end of this course, learners will also be able to assess
+     the likelihood of events being independent to indicate whether further statistical
+     analysis is likely to yield results.|Tags:conditional probability, bayes'' theorem|Course
+     language: Python|Target Audience:Professionals some Python experience who would
+     like to expand their skill set to more advanced Python visualization techniques
+     and tools.|Prerequisite course required: Hypothesis Testing in Python'
+   - 'Course Name:Foundations of Generative AI|Course Description:Foundations of Generative
+     AI|Tags:Foundations, Generative, AI|Course language: None|Target Audience:No target
+     audience|No prerequisite course required'
+   - 'Course Name:Data Science for Managers|Course Description:This course is designed
+     for managers seeking to bolster their data literacy with a deep dive into data
+     science tools and teams, project life cycles, and methods.|Tags:data driven culture,
+     data analytics, data quality, storytelling, data science|Course language: TBD|Target
+     Audience:This course is targeted for those who would like to understand more about
+     data literacy, make more informed decisions and identify data-driven solutions
+     through data science tools and methods.|No prerequisite course required'
+ ---
+ 
+ # SentenceTransformer based on BAAI/bge-base-en-v1.5
+ 
+ This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
+ 
+ ## Model Details
+ 
+ ### Model Description
+ - **Model Type:** Sentence Transformer
+ - **Base model:** [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) <!-- at revision a5beb1e3e68b9ab74eb54cfd186867f64f240e1a -->
+ - **Maximum Sequence Length:** 512 tokens
+ - **Output Dimensionality:** 768 dimensions
+ - **Similarity Function:** Cosine Similarity
+ <!-- - **Training Dataset:** Unknown -->
+ <!-- - **Language:** Unknown -->
+ <!-- - **License:** Unknown -->
+ 
+ ### Model Sources
+ 
+ - **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
+ - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
+ - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
+ 
+ ### Full Model Architecture
+ 
+ ```
+ SentenceTransformer(
+   (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
+   (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
+   (2): Normalize()
+ )
+ ```
+ 
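The pooling module above keeps only the `[CLS]` token's vector (`pooling_mode_cls_token: True`) and the `Normalize()` module scales each sentence embedding to unit length. A minimal numpy sketch of those two steps, using made-up toy token embeddings in place of the BertModel output (`cls_pool_and_normalize` is a hypothetical helper name, not part of the library):

```python
import numpy as np

def cls_pool_and_normalize(token_embeddings):
    """Mimic Pooling(pooling_mode_cls_token=True) followed by Normalize():
    take each sentence's first-token ([CLS]) vector, then L2-normalize it."""
    cls = token_embeddings[:, 0, :]  # (batch, hidden): the [CLS] position
    return cls / np.linalg.norm(cls, axis=1, keepdims=True)

# Toy stand-in: batch of 2 sentences, 4 tokens each, hidden size 8 (768 in the real model)
rng = np.random.default_rng(0)
tokens = rng.random((2, 4, 8))
emb = cls_pool_and_normalize(tokens)
print(emb.shape)                                       # (2, 8)
print(np.allclose(np.linalg.norm(emb, axis=1), 1.0))   # True: unit-length vectors
```

Because the output vectors are unit length, cosine similarity between two embeddings reduces to a plain dot product.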
+ ## Usage
+ 
+ ### Direct Usage (Sentence Transformers)
+ 
+ First install the Sentence Transformers library:
+ 
+ ```bash
+ pip install -U sentence-transformers
+ ```
+ 
+ Then you can load this model and run inference.
+ ```python
+ from sentence_transformers import SentenceTransformer
+ 
+ # Download from the 🤗 Hub
+ model = SentenceTransformer("datasocietyco/bge-base-en-v1.5-course-recommender-v4python")
+ # Run inference
+ sentences = [
+     "Foundations of Probability Theory in Python. This course guides learners through a comprehensive review of advanced statistics topics on probability, such as permutations and combinations, joint probability, conditional probability, and marginal probability. Learners will also become familiar with Bayes’ theorem, a rule that provides a way to calculate the probability of a cause given its outcome. By the end of this course, learners will also be able to assess the likelihood of events being independent to indicate whether further statistical analysis is likely to yield results.. tags: conditional probability, bayes' theorem. Languages: Course language: Python. Prerequisites: Prerequisite course required: Hypothesis Testing in Python. Target audience: Professionals some Python experience who would like to expand their skill set to more advanced Python visualization techniques and tools..",
+     "Course Name:Foundations of Probability Theory in Python|Course Description:This course guides learners through a comprehensive review of advanced statistics topics on probability, such as permutations and combinations, joint probability, conditional probability, and marginal probability. Learners will also become familiar with Bayes’ theorem, a rule that provides a way to calculate the probability of a cause given its outcome. By the end of this course, learners will also be able to assess the likelihood of events being independent to indicate whether further statistical analysis is likely to yield results.|Tags:conditional probability, bayes' theorem|Course language: Python|Target Audience:Professionals some Python experience who would like to expand their skill set to more advanced Python visualization techniques and tools.|Prerequisite course required: Hypothesis Testing in Python",
+     'Course Name:Foundations of Generative AI|Course Description:Foundations of Generative AI|Tags:Foundations, Generative, AI|Course language: None|Target Audience:No target audience|No prerequisite course required',
+ ]
+ embeddings = model.encode(sentences)
+ print(embeddings.shape)
+ # (3, 768)
+ 
+ # Get the similarity scores for the embeddings
+ similarities = model.similarity(embeddings, embeddings)
+ print(similarities.shape)
+ # torch.Size([3, 3])
+ ```
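Since this checkpoint is a course recommender, a typical downstream step is ranking catalog courses against a learner query. A minimal sketch of that ranking with made-up 2-d stand-ins for the normalized 768-d vectors that `model.encode` would return (`top_k` is a hypothetical helper, not a library function):

```python
import numpy as np

def top_k(query_emb, course_emb, k=2):
    """Rank courses by cosine similarity. Because the model L2-normalizes
    its output, cosine similarity is just a dot product here."""
    scores = course_emb @ query_emb          # one similarity score per course
    order = np.argsort(-scores)[:k]          # indices of the k best matches
    return order.tolist(), scores[order].tolist()

# Toy unit vectors standing in for course-description embeddings
course_emb = np.array([[1.0, 0.0],   # course 0: exact topical match
                       [0.0, 1.0],   # course 1: unrelated
                       [0.6, 0.8]])  # course 2: partially related
query_emb = np.array([1.0, 0.0])

idx, scores = top_k(query_emb, course_emb)
print(idx)   # [0, 2]
```

In production the catalog embeddings would be computed once and cached, so each query costs a single matrix-vector product.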
+ 
+ <!--
+ ### Direct Usage (Transformers)
+ 
+ <details><summary>Click to see the direct usage in Transformers</summary>
+ 
+ </details>
+ -->
+ 
+ <!--
+ ### Downstream Usage (Sentence Transformers)
+ 
+ You can finetune this model on your own dataset.
+ 
+ <details><summary>Click to expand</summary>
+ 
+ </details>
+ -->
+ 
+ <!--
+ ### Out-of-Scope Use
+ 
+ *List how the model may foreseeably be misused and address what users ought not to do with the model.*
+ -->
+ 
+ <!--
+ ## Bias, Risks and Limitations
+ 
+ *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
+ -->
+ 
+ <!--
+ ### Recommendations
+ 
+ *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
+ -->
+ 
+ ## Training Details
+ 
+ ### Training Dataset
+ 
+ #### Unnamed Dataset
+ 
+ * Size: 48 training samples
+ * Columns: <code>anchor</code> and <code>positive</code>
+ * Approximate statistics based on the first 48 samples:
+   |         | anchor | positive |
+   |:--------|:-------|:---------|
+   | type    | string | string |
+   | details | <ul><li>min: 49 tokens</li><li>mean: 188.12 tokens</li><li>max: 322 tokens</li></ul> | <ul><li>min: 47 tokens</li><li>mean: 186.12 tokens</li><li>max: 320 tokens</li></ul> |
+ * Samples:
+   | anchor | positive |
+   |:-------|:---------|
+   | <code>Outlier Detection with DBSCAN in Python. Density-Based Spatial Clustering of Applications with Noise, or DBSCAN, contrasts groups of densely-packed data with points isolated in low-density regions. In this course, learners will discuss the optimal data conditions suited to this method of outlier detection. After discussing different basic varieties of anomaly detection, learners will implement DBSCAN to identify likely outliers. They will also use a balancing method called Synthetic Minority Oversampling Technique, or SMOTE, to generate additional examples of outliers and improve the anomaly detection model.. tags: outlier, SMOTE, anomaly, DBSCAN. Languages: Course language: Python. Prerequisites: Prerequisite course required: Intro to Clustering. Target audience: Professionals with some Python experience who would like to expand their skills to learn about various outlier detection techniques.</code> | <code>Course Name:Outlier Detection with DBSCAN in Python|Course Description:Density-Based Spatial Clustering of Applications with Noise, or DBSCAN, contrasts groups of densely-packed data with points isolated in low-density regions. In this course, learners will discuss the optimal data conditions suited to this method of outlier detection. After discussing different basic varieties of anomaly detection, learners will implement DBSCAN to identify likely outliers. They will also use a balancing method called Synthetic Minority Oversampling Technique, or SMOTE, to generate additional examples of outliers and improve the anomaly detection model.|Tags:outlier, SMOTE, anomaly, DBSCAN|Course language: Python|Target Audience:Professionals with some Python experience who would like to expand their skills to learn about various outlier detection techniques|Prerequisite course required: Intro to Clustering</code> |
+   | <code>Foundations of Python. This course introduces learners to the fundamentals of the Python programming language. Python is one of the most widely used computer languages in the world, helpful for building web-based applications, performing data analysis, and automating tasks. By the end of this course, learners will identify how data scientists use Python, distinguish among basic data types and data structures, and perform simple arithmetic and variable-related tasks.. tags: functions, basics, data-structures, control-flow. Languages: Course language: Python. Prerequisites: Prerequisite course required: Version Control with Git. Target audience: This is an introductory level course for data scientists who want to learn basics of Python and implement different data manipulation techniques using popular data wrangling Python libraries..</code> | <code>Course Name:Foundations of Python|Course Description:This course introduces learners to the fundamentals of the Python programming language. Python is one of the most widely used computer languages in the world, helpful for building web-based applications, performing data analysis, and automating tasks. By the end of this course, learners will identify how data scientists use Python, distinguish among basic data types and data structures, and perform simple arithmetic and variable-related tasks.|Tags:functions, basics, data-structures, control-flow|Course language: Python|Target Audience:This is an introductory level course for data scientists who want to learn basics of Python and implement different data manipulation techniques using popular data wrangling Python libraries.|Prerequisite course required: Version Control with Git</code> |
+   | <code>Text Generation with LLMs in Python. This course provides a practical introduction to the latest advancements in generative AI with a focus on text. To start, the course explores the use of reinforcement learning in natural language processing (NLP). Learners will delve into approaches for conversational and question-answering (QA) tasks, highlighting the capabilities, limitations, and use cases of models available in the Hugging Face library, such as Dolly v2. Finally, learners will gain hands-on experience in creating their own chatbot by using the concepts of Retrieval Augmented Generation (RAG) in LlamaIndex.. tags: course, provides, practical, introduction, latest, advancements, generative, AI, focus, text., start,, course, explores, use, reinforcement, learning, natural, language, processing, (NLP)., Learners, will, delve, into, approaches, conversational, question-answering, (QA), tasks,, highlighting, capabilities,, limitations,, use, cases, models, available, Hugging, Face, library,, such, as, Dolly, v2., Finally,, learners, will, gain, hands-on, experience, creating, their, own, chatbot, using, concepts, Retrieval, Augmented, Generation, (RAG), LlamaIndex.. Languages: Course language: None. Prerequisites: No prerequisite course required. Target audience: No target audience.</code> | <code>Course Name:Text Generation with LLMs in Python|Course Description:This course provides a practical introduction to the latest advancements in generative AI with a focus on text. To start, the course explores the use of reinforcement learning in natural language processing (NLP). Learners will delve into approaches for conversational and question-answering (QA) tasks, highlighting the capabilities, limitations, and use cases of models available in the Hugging Face library, such as Dolly v2. Finally, learners will gain hands-on experience in creating their own chatbot by using the concepts of Retrieval Augmented Generation (RAG) in LlamaIndex.|Tags:course, provides, practical, introduction, latest, advancements, generative, AI, focus, text., start,, course, explores, use, reinforcement, learning, natural, language, processing, (NLP)., Learners, will, delve, into, approaches, conversational, question-answering, (QA), tasks,, highlighting, capabilities,, limitations,, use, cases, models, available, Hugging, Face, library,, such, as, Dolly, v2., Finally,, learners, will, gain, hands-on, experience, creating, their, own, chatbot, using, concepts, Retrieval, Augmented, Generation, (RAG), LlamaIndex.|Course language: None|Target Audience:No target audience|No prerequisite course required</code> |
+ * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
+   ```json
+   {
+       "scale": 20.0,
+       "similarity_fct": "cos_sim"
+   }
+   ```
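MultipleNegativesRankingLoss treats each anchor's paired positive as the correct "class" and every other positive in the batch as an in-batch negative: it computes cosine similarities between all anchors and all positives, scales them (here by 20.0), and applies cross-entropy with targets on the diagonal. A rough numpy sketch of that computation under the parameters above (`mnr_loss` is an illustrative helper, not the library implementation):

```python
import numpy as np

def mnr_loss(anchors, positives, scale=20.0):
    """In-batch-negatives cross-entropy over scaled cosine similarities:
    row i's target column is i (its own positive)."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    scores = scale * (a @ p.T)                  # (batch, batch) cos_sim * scale
    # Numerically stable log-softmax over each row
    scores = scores - scores.max(axis=1, keepdims=True)
    log_probs = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))  # average over the batch

# Perfectly separable toy batch: each anchor equals its positive and is
# orthogonal to the others, so the loss is near zero.
batch = np.eye(3)
print(mnr_loss(batch, batch) < 1e-6)   # True
```

This is why the anchor/positive pair format above needs no explicit negatives: with 48 samples, every batch supplies its own negatives for free.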
253
+
254
+ ### Evaluation Dataset
255
+
256
+ #### Unnamed Dataset
257
+
258
+
259
+ * Size: 12 evaluation samples
260
+ * Columns: <code>anchor</code> and <code>positive</code>
261
+ * Approximate statistics based on the first 12 samples:
262
+ | | anchor | positive |
263
+ |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
264
+ | type | string | string |
265
+ | details | <ul><li>min: 46 tokens</li><li>mean: 162.92 tokens</li><li>max: 363 tokens</li></ul> | <ul><li>min: 44 tokens</li><li>mean: 160.92 tokens</li><li>max: 361 tokens</li></ul> |
266
+ * Samples:
267
+ | anchor | positive |
268
+ |:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
269
+ | <code>Fundamentals of Deep Learning for Multi GPUs. Find out how to use multiple GPUs to train neural networks and effectively parallelize\ntraining of deep neural networks using TensorFlow.. tags: multiple GPUs, neural networks, TensorFlow, parallelize. Languages: Course language: Python. Prerequisites: No prerequisite course required. Target audience: Professionals want to train deep neural networks on multi-GPU technology to shorten\nthe training time required for data-intensive applications.</code> | <code>Course Name:Fundamentals of Deep Learning for Multi GPUs|Course Description:Find out how to use multiple GPUs to train neural networks and effectively parallelize\ntraining of deep neural networks using TensorFlow.|Tags:multiple GPUs, neural networks, TensorFlow, parallelize|Course language: Python|Target Audience:Professionals want to train deep neural networks on multi-GPU technology to shorten\nthe training time required for data-intensive applications|No prerequisite course required</code> |
270
+ | <code>Building Transformer-Based NLP Applications (NVIDIA). Learn how to apply and fine-tune a Transformer-based Deep Learning model to Natural Language Processing (NLP) tasks. In this course, you'll construct a Transformer neural network in PyTorch, Build a named-entity recognition (NER) application with BERT, Deploy the NER application with ONNX and TensorRT to a Triton inference server. Upon completion, you’ll be proficient i.n task-agnostic applications of Transformer-based models. Data Society's instructors are certified by NVIDIA’s Deep Learning Institute to teach this course.. tags: named-entity recognition, text, Natural language processing, classification, NLP, NER. Languages: Course language: Python. Prerequisites: No prerequisite course required. Target audience: Professionals with basic knowledge of neural networks and want to expand their knowledge in the world of Natural langauge processing.</code> | <code>Course Name:Building Transformer-Based NLP Applications (NVIDIA)|Course Description:Learn how to apply and fine-tune a Transformer-based Deep Learning model to Natural Language Processing (NLP) tasks. In this course, you'll construct a Transformer neural network in PyTorch, Build a named-entity recognition (NER) application with BERT, Deploy the NER application with ONNX and TensorRT to a Triton inference server. Upon completion, you’ll be proficient i.n task-agnostic applications of Transformer-based models. Data Society's instructors are certified by NVIDIA’s Deep Learning Institute to teach this course.|Tags:named-entity recognition, text, Natural language processing, classification, NLP, NER|Course language: Python|Target Audience:Professionals with basic knowledge of neural networks and want to expand their knowledge in the world of Natural langauge processing|No prerequisite course required</code> |
+ | <code>Nonlinear Regression in Python. In this course, learners will practice implementing a variety of nonlinear regression techniques in Python to model complex relationships beyond simple linear patterns. They will learn to interpret key transformations, including logarithmic (log-log, log-linear) and polynomial models, and identify interaction effects between predictor variables. Through hands-on exercises, they will also develop practical skills in selecting, fitting, and validating the most appropriate nonlinear model for their data.. tags: nonlinear, regression. Languages: Course language: Python. Prerequisites: Prerequisite course required: Multiple Linear Regression. Target audience: This is an intermediate level course for data scientists who want to learn to understand and estimate relationships between a set of independent variables and a continuous dependent variable..</code> | <code>Course Name:Nonlinear Regression in Python|Course Description:In this course, learners will practice implementing a variety of nonlinear regression techniques in Python to model complex relationships beyond simple linear patterns. They will learn to interpret key transformations, including logarithmic (log-log, log-linear) and polynomial models, and identify interaction effects between predictor variables. Through hands-on exercises, they will also develop practical skills in selecting, fitting, and validating the most appropriate nonlinear model for their data.|Tags:nonlinear, regression|Course language: Python|Target Audience:This is an intermediate level course for data scientists who want to learn to understand and estimate relationships between a set of independent variables and a continuous dependent variable.|Prerequisite course required: Multiple Linear Regression</code> |
+ * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
+ ```json
+ {
+     "scale": 20.0,
+     "similarity_fct": "cos_sim"
+ }
+ ```
+
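For intuition, the loss above can be sketched outside the library: with in-batch negatives, each anchor is scored against every positive in the batch via scaled cosine similarity, and the resulting matrix is fed to cross-entropy with the diagonal (the matching pair) as the target. A minimal NumPy sketch, not the sentence-transformers implementation (the function name and toy vectors are illustrative):

```python
import numpy as np

def mnr_loss(anchors, positives, scale=20.0):
    """Sketch of MultipleNegativesRankingLoss with in-batch negatives:
    row i of the scaled cosine-similarity matrix is a softmax
    classification problem whose correct class is column i."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    scores = scale * (a @ p.T)                      # scale=20.0, similarity_fct=cos_sim
    log_probs = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))      # cross-entropy on the diagonal

pairs = np.eye(4, 8)                                # four orthogonal toy "embeddings"
print(mnr_loss(pairs, pairs))                       # matched pairs: near 0
print(mnr_loss(pairs, np.roll(pairs, 1, axis=0)))   # mismatched pairs: near the scale, ~20
```

Pushing matched pairs toward 0 loss while mismatched batches sit near the scale is exactly the ranking pressure that separates course descriptions in embedding space.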
+ ### Training Hyperparameters
+ #### Non-Default Hyperparameters
+
+ - `eval_strategy`: steps
+ - `per_device_train_batch_size`: 16
+ - `per_device_eval_batch_size`: 16
+ - `learning_rate`: 3e-06
+ - `max_steps`: 24
+ - `warmup_ratio`: 0.1
+ - `batch_sampler`: no_duplicates
+
+ #### All Hyperparameters
+ <details><summary>Click to expand</summary>
+
+ - `overwrite_output_dir`: False
+ - `do_predict`: False
+ - `eval_strategy`: steps
+ - `prediction_loss_only`: True
+ - `per_device_train_batch_size`: 16
+ - `per_device_eval_batch_size`: 16
+ - `per_gpu_train_batch_size`: None
+ - `per_gpu_eval_batch_size`: None
+ - `gradient_accumulation_steps`: 1
+ - `eval_accumulation_steps`: None
+ - `torch_empty_cache_steps`: None
+ - `learning_rate`: 3e-06
+ - `weight_decay`: 0.0
+ - `adam_beta1`: 0.9
+ - `adam_beta2`: 0.999
+ - `adam_epsilon`: 1e-08
+ - `max_grad_norm`: 1.0
+ - `num_train_epochs`: 3.0
+ - `max_steps`: 24
+ - `lr_scheduler_type`: linear
+ - `lr_scheduler_kwargs`: {}
+ - `warmup_ratio`: 0.1
+ - `warmup_steps`: 0
+ - `log_level`: passive
+ - `log_level_replica`: warning
+ - `log_on_each_node`: True
+ - `logging_nan_inf_filter`: True
+ - `save_safetensors`: True
+ - `save_on_each_node`: False
+ - `save_only_model`: False
+ - `restore_callback_states_from_checkpoint`: False
+ - `no_cuda`: False
+ - `use_cpu`: False
+ - `use_mps_device`: False
+ - `seed`: 42
+ - `data_seed`: None
+ - `jit_mode_eval`: False
+ - `use_ipex`: False
+ - `bf16`: False
+ - `fp16`: False
+ - `fp16_opt_level`: O1
+ - `half_precision_backend`: auto
+ - `bf16_full_eval`: False
+ - `fp16_full_eval`: False
+ - `tf32`: None
+ - `local_rank`: 0
+ - `ddp_backend`: None
+ - `tpu_num_cores`: None
+ - `tpu_metrics_debug`: False
+ - `debug`: []
+ - `dataloader_drop_last`: False
+ - `dataloader_num_workers`: 0
+ - `dataloader_prefetch_factor`: None
+ - `past_index`: -1
+ - `disable_tqdm`: False
+ - `remove_unused_columns`: True
+ - `label_names`: None
+ - `load_best_model_at_end`: False
+ - `ignore_data_skip`: False
+ - `fsdp`: []
+ - `fsdp_min_num_params`: 0
+ - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
+ - `fsdp_transformer_layer_cls_to_wrap`: None
+ - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
+ - `deepspeed`: None
+ - `label_smoothing_factor`: 0.0
+ - `optim`: adamw_torch
+ - `optim_args`: None
+ - `adafactor`: False
+ - `group_by_length`: False
+ - `length_column_name`: length
+ - `ddp_find_unused_parameters`: None
+ - `ddp_bucket_cap_mb`: None
+ - `ddp_broadcast_buffers`: False
+ - `dataloader_pin_memory`: True
+ - `dataloader_persistent_workers`: False
+ - `skip_memory_metrics`: True
+ - `use_legacy_prediction_loop`: False
+ - `push_to_hub`: False
+ - `resume_from_checkpoint`: None
+ - `hub_model_id`: None
+ - `hub_strategy`: every_save
+ - `hub_private_repo`: False
+ - `hub_always_push`: False
+ - `gradient_checkpointing`: False
+ - `gradient_checkpointing_kwargs`: None
+ - `include_inputs_for_metrics`: False
+ - `eval_do_concat_batches`: True
+ - `fp16_backend`: auto
+ - `push_to_hub_model_id`: None
+ - `push_to_hub_organization`: None
+ - `mp_parameters`:
+ - `auto_find_batch_size`: False
+ - `full_determinism`: False
+ - `torchdynamo`: None
+ - `ray_scope`: last
+ - `ddp_timeout`: 1800
+ - `torch_compile`: False
+ - `torch_compile_backend`: None
+ - `torch_compile_mode`: None
+ - `dispatch_batches`: None
+ - `split_batches`: None
+ - `include_tokens_per_second`: False
+ - `include_num_input_tokens_seen`: False
+ - `neftune_noise_alpha`: None
+ - `optim_target_modules`: None
+ - `batch_eval_metrics`: False
+ - `eval_on_start`: False
+ - `use_liger_kernel`: False
+ - `eval_use_gather_object`: False
+ - `batch_sampler`: no_duplicates
+ - `multi_dataset_batch_sampler`: proportional
+
+ </details>
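Concretely, with `max_steps` 24 and `warmup_ratio` 0.1, the `linear` scheduler ramps the learning rate up over the first few steps and then decays it linearly to zero. A small sketch of that schedule (assuming the ratio is rounded up to a whole number of warmup steps, as transformers does):

```python
import math

def linear_lr(step, base_lr=3e-06, max_steps=24, warmup_ratio=0.1):
    """Linear warmup followed by linear decay to zero."""
    warmup_steps = math.ceil(max_steps * warmup_ratio)   # ceil(24 * 0.1) -> 3 steps
    if step < warmup_steps:
        return base_lr * step / warmup_steps             # ramp up from 0
    return base_lr * (max_steps - step) / (max_steps - warmup_steps)  # decay to 0

print(linear_lr(0))     # 0.0
print(linear_lr(3))     # peak learning rate, 3e-06
print(linear_lr(24))    # 0.0
```

With only 24 total steps, roughly the first eighth of training is warmup, which is why the single logged loss value above comes from well into the decay phase.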
+
+ ### Training Logs
+ | Epoch | Step | Training Loss | loss |
+ |:------:|:----:|:-------------:|:------:|
+ | 6.6667 | 20 | 0.046 | 0.0188 |
+
+
+ ### Framework Versions
+ - Python: 3.9.13
+ - Sentence Transformers: 3.1.1
+ - Transformers: 4.45.1
+ - PyTorch: 2.2.2
+ - Accelerate: 0.34.2
+ - Datasets: 3.0.0
+ - Tokenizers: 0.20.0
+
+ ## Citation
+
+ ### BibTeX
+
+ #### Sentence Transformers
+ ```bibtex
+ @inproceedings{reimers-2019-sentence-bert,
+     title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
+     author = "Reimers, Nils and Gurevych, Iryna",
+     booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
+     month = "11",
+     year = "2019",
+     publisher = "Association for Computational Linguistics",
+     url = "https://arxiv.org/abs/1908.10084",
+ }
+ ```
+
+ #### MultipleNegativesRankingLoss
+ ```bibtex
+ @misc{henderson2017efficient,
+     title={Efficient Natural Language Response Suggestion for Smart Reply},
+     author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
+     year={2017},
+     eprint={1705.00652},
+     archivePrefix={arXiv},
+     primaryClass={cs.CL}
+ }
+ ```
+
+ <!--
+ ## Glossary
+
+ *Clearly define terms in order to be accessible across audiences.*
+ -->
+
+ <!--
+ ## Model Card Authors
+
+ *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
+ -->
+
+ <!--
+ ## Model Card Contact
+
+ *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
+ -->
config.json ADDED
@@ -0,0 +1,32 @@
+ {
+   "_name_or_path": "BAAI/bge-base-en-v1.5",
+   "architectures": [
+     "BertModel"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "classifier_dropout": null,
+   "gradient_checkpointing": false,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 768,
+   "id2label": {
+     "0": "LABEL_0"
+   },
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "label2id": {
+     "LABEL_0": 0
+   },
+   "layer_norm_eps": 1e-12,
+   "max_position_embeddings": 512,
+   "model_type": "bert",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 12,
+   "pad_token_id": 0,
+   "position_embedding_type": "absolute",
+   "torch_dtype": "float32",
+   "transformers_version": "4.45.1",
+   "type_vocab_size": 2,
+   "use_cache": true,
+   "vocab_size": 30522
+ }
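These config fields pin down the BERT-base geometry, which can be cross-checked against the 437,951,328-byte float32 `model.safetensors` checkpoint: the parameter count implied by the hyperparameters, times 4 bytes, should land just under the file size, with the small remainder being the safetensors header. A back-of-the-envelope sketch:

```python
# Shapes taken from config.json: vocab_size, hidden_size, num_hidden_layers,
# intermediate_size, max_position_embeddings, type_vocab_size
vocab, hidden, layers, inter, max_pos, types = 30522, 768, 12, 3072, 512, 2

embeddings = (vocab + max_pos + types) * hidden + 2 * hidden   # token/position/type tables + LayerNorm
per_layer = (
    4 * (hidden * hidden + hidden)   # Q, K, V and attention-output projections
    + hidden * inter + inter         # feed-forward up-projection
    + inter * hidden + hidden        # feed-forward down-projection
    + 2 * 2 * hidden                 # two LayerNorms (weight + bias each)
)
pooler = hidden * hidden + hidden    # BertModel's pooler head
total = embeddings + layers * per_layer + pooler

print(total)        # 109,482,240 parameters, i.e. the familiar ~109.5M of BERT-base
print(total * 4)    # 437,928,960 bytes of float32 weights, just under the file size
```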
config_sentence_transformers.json ADDED
@@ -0,0 +1,10 @@
+ {
+   "__version__": {
+     "sentence_transformers": "3.1.1",
+     "transformers": "4.45.1",
+     "pytorch": "2.2.2"
+   },
+   "prompts": {},
+   "default_prompt_name": null,
+   "similarity_fn_name": null
+ }
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b8fa5bb2536e91a74f9232fd81b9865fa703a87d1348d14491ed3173ecd1051b
+ size 437951328
modules.json ADDED
@@ -0,0 +1,20 @@
+ [
+   {
+     "idx": 0,
+     "name": "0",
+     "path": "",
+     "type": "sentence_transformers.models.Transformer"
+   },
+   {
+     "idx": 1,
+     "name": "1",
+     "path": "1_Pooling",
+     "type": "sentence_transformers.models.Pooling"
+   },
+   {
+     "idx": 2,
+     "name": "2",
+     "path": "2_Normalize",
+     "type": "sentence_transformers.models.Normalize"
+   }
+ ]
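`modules.json` chains three modules: the Transformer backbone, pooling (per `1_Pooling/config.json`, `pooling_mode_cls_token` is the only mode enabled), and L2 normalization, so every embedding comes out unit-length and dot products equal cosine similarities. A toy NumPy sketch of the two stages after the Transformer (the token embeddings here are made up):

```python
import numpy as np

def pool_and_normalize(token_embeddings):
    """Sketch of modules 1 and 2 of the stack: keep the [CLS]
    (first) token's vector, then scale it to unit length."""
    cls = token_embeddings[0]            # pooling_mode_cls_token: true
    return cls / np.linalg.norm(cls)     # 2_Normalize

tokens = np.array([[3.0, 4.0],           # [CLS] token vector (toy, dim=2)
                   [1.0, 0.0]])          # another token, ignored by CLS pooling
print(pool_and_normalize(tokens))        # [0.6 0.8], unit length
```

Because of the final normalization step, ranking by dot product, cosine similarity, or Euclidean distance gives the same ordering of results.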
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+   "max_seq_length": 512,
+   "do_lower_case": true
+ }
special_tokens_map.json ADDED
@@ -0,0 +1,37 @@
+ {
+   "cls_token": {
+     "content": "[CLS]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "mask_token": {
+     "content": "[MASK]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": {
+     "content": "[PAD]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "sep_token": {
+     "content": "[SEP]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "unk_token": {
+     "content": "[UNK]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,57 @@
+ {
+   "added_tokens_decoder": {
+     "0": {
+       "content": "[PAD]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "100": {
+       "content": "[UNK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "101": {
+       "content": "[CLS]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "102": {
+       "content": "[SEP]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "103": {
+       "content": "[MASK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "clean_up_tokenization_spaces": true,
+   "cls_token": "[CLS]",
+   "do_basic_tokenize": true,
+   "do_lower_case": true,
+   "mask_token": "[MASK]",
+   "model_max_length": 512,
+   "never_split": null,
+   "pad_token": "[PAD]",
+   "sep_token": "[SEP]",
+   "strip_accents": null,
+   "tokenize_chinese_chars": true,
+   "tokenizer_class": "BertTokenizer",
+   "unk_token": "[UNK]"
+ }
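Tying the tokenizer files together: ids 101, 102, and 0 ([CLS], [SEP], [PAD] in `added_tokens_decoder`) frame every input the model sees, up to `model_max_length` 512. A hand-rolled sketch of that framing (the content-token ids are made up for illustration; a real `BertTokenizer` call would produce them):

```python
CLS, SEP, PAD = 101, 102, 0   # special-token ids from tokenizer_config.json

def frame(content_ids, max_len=8):
    """Wrap WordPiece ids in [CLS] ... [SEP] and right-pad with [PAD]."""
    ids = [CLS] + content_ids[: max_len - 2] + [SEP]   # leave room for the specials
    return ids + [PAD] * (max_len - len(ids))

print(frame([7592, 2088]))    # [101, 7592, 2088, 102, 0, 0, 0, 0]
```

CLS pooling (module 1 above) reads its embedding from position 0, which is why the [CLS] id always leads the sequence.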
vocab.txt ADDED
The diff for this file is too large to render. See raw diff