---
base_model: BAAI/bge-base-en-v1.5
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
  - sentence-transformers
  - sentence-similarity
  - feature-extraction
  - generated_from_trainer
  - dataset_size:48
  - loss:MultipleNegativesRankingLoss
widget:
  - source_sentence: >-
      Fundamentals of Deep Learning for Multi GPUs. Find out how to use multiple
      GPUs to train neural networks and effectively parallelize\ntraining of
      deep neural networks using TensorFlow.. tags: multiple GPUs, neural
      networks, TensorFlow, parallelize. Languages: Course language: Python.
      Prerequisites: No prerequisite course required. Target audience:
      Professionals want to train deep neural networks on multi-GPU technology
      to shorten\nthe training time required for data-intensive applications.
    sentences:
      - >-
        Course Name:Hypothesis Testing in Python|Course Description:In this
        course, learners with foundational knowledge of statistical concepts
        will dive deeper into hypothesis testing by focusing on three standard
        tests of statistical significance: t-tests, F-tests, and chi-squared
        tests. Covering topics such as t-value, t-distribution, chi-square
        distribution, F-statistic, and F-distribution, this course will
        familiarize learners with techniques that will enable them to assess
        normality of data and goodness-of-fit and to compare observed and
        expected frequencies objectively.|Tags:f-distribution, chi-square
        distribution, f-statistic, t-distribution, t-value|Course language:
        Python|Target Audience:Professionals some Python experience who would
        like to expand their skill set to more advanced Python visualization
        techniques and tools.|Prerequisite course required: Foundations of
        Statistics in Python
      - >-
        Course Name:Foundations of Data & AI Literacy for Managers|Course
        Description:Designed for managers leading teams and projects, this
        course empowers individuals to build data-driven organizations and
        integrate AI tools into daily operations. Learners will gain a
        foundational understanding of data and AI concepts and learn how to
        leverage them for actionable business insights. Managers will develop
        the skills to increase collaboration with technical experts and make
        informed decisions about analysis methods, ensuring their enterprise
        thrives in today’s data-driven landscape.|Tags:Designed, managers,
        leading, teams, projects,, course, empowers, individuals, build,
        data-driven, organizations, integrate, AI, tools, into, daily,
        operations., Learners, will, gain, foundational, understanding, data,
        AI, concepts, learn, how, leverage, them, actionable, business,
        insights., Managers, will, develop, skills, increase, collaboration,
        technical, experts, make, informed, decisions, about, analysis,
        methods,, ensuring, their, enterprise, thrives, today’s, data-driven,
        landscape.|Course language: None|Target Audience:No target audience|No
        prerequisite course required
      - >-
        Course Name:Fundamentals of Deep Learning for Multi GPUs|Course
        Description:Find out how to use multiple GPUs to train neural networks
        and effectively parallelize\ntraining of deep neural networks using
        TensorFlow.|Tags:multiple GPUs, neural networks, TensorFlow,
        parallelize|Course language: Python|Target Audience:Professionals want
        to train deep neural networks on multi-GPU technology to shorten\nthe
        training time required for data-intensive applications|No prerequisite
        course required
  - source_sentence: >-
      Data Visualization Design & Storytelling. This course focuses on the
      fundamentals of data visualization, which helps support data-driven
      decision-making and to create a data-driven culture.. tags: data driven
      culture, data analytics, data literacy, data quality, storytelling, data
      science. Languages: Course language: TBD. Prerequisites: No prerequisite
      course required. Target audience: Professionals who would like to
      understand more about how to visualize data, design and concepts of
      storytelling through data..
    sentences:
      - >-
        Course Name:Building Transformer-Based NLP Applications (NVIDIA)|Course
        Description:Learn how to apply and fine-tune a Transformer-based Deep
        Learning model to Natural Language Processing (NLP) tasks. In this
        course, you'll construct a Transformer neural network in PyTorch, Build
        a named-entity recognition (NER) application with BERT, Deploy the NER
        application with ONNX and TensorRT to a Triton inference server. Upon
        completion, you’ll be proficient i.n task-agnostic applications of
        Transformer-based models. Data Society's instructors are certified by
        NVIDIA’s Deep Learning Institute to teach this course.|Tags:named-entity
        recognition, text, Natural language processing, classification, NLP,
        NER|Course language: Python|Target Audience:Professionals with basic
        knowledge of neural networks and want to expand their knowledge in the
        world of Natural langauge processing|No prerequisite course required
      - >-
        Course Name:Nonlinear Regression in Python|Course Description:In this
        course, learners will practice implementing a variety of nonlinear
        regression techniques in Python to model complex relationships beyond
        simple linear patterns. They will learn to interpret key
        transformations, including logarithmic (log-log, log-linear) and
        polynomial models, and identify interaction effects between predictor
        variables. Through hands-on exercises, they will also develop practical
        skills in selecting, fitting, and validating the most appropriate
        nonlinear model for their data.|Tags:nonlinear, regression|Course
        language: Python|Target Audience:This is an intermediate level course
        for data scientists who want to learn to understand and estimate
        relationships between a set of independent variables and a continuous
        dependent variable.|Prerequisite course required: Multiple Linear
        Regression
      - >-
        Course Name:Data Visualization Design & Storytelling|Course
        Description:This course focuses on the fundamentals of data
        visualization, which helps support data-driven decision-making and to
        create a data-driven culture.|Tags:data driven culture, data analytics,
        data literacy, data quality, storytelling, data science|Course language:
        TBD|Target Audience:Professionals who would like to understand more
        about how to visualize data, design and concepts of storytelling through
        data.|No prerequisite course required
  - source_sentence: >-
      Foundations of Probability Theory in Python. This course guides learners
      through a comprehensive review of advanced statistics topics on
      probability, such as permutations and combinations, joint probability,
      conditional probability, and marginal probability. Learners will also
      become familiar with Bayes’ theorem, a rule that provides a way to
      calculate the probability of a cause given its outcome. By the end of this
      course, learners will also be able to assess the likelihood of events
      being independent to indicate whether further statistical analysis is
      likely to yield results.. tags: conditional probability, bayes' theorem.
      Languages: Course language: Python. Prerequisites: Prerequisite course
      required: Hypothesis Testing in Python. Target audience: Professionals
      some Python experience who would like to expand their skill set to more
      advanced Python visualization techniques and tools..
    sentences:
      - >-
        Course Name:Foundations of Probability Theory in Python|Course
        Description:This course guides learners through a comprehensive review
        of advanced statistics topics on probability, such as permutations and
        combinations, joint probability, conditional probability, and marginal
        probability. Learners will also become familiar with Bayes’ theorem, a
        rule that provides a way to calculate the probability of a cause given
        its outcome. By the end of this course, learners will also be able to
        assess the likelihood of events being independent to indicate whether
        further statistical analysis is likely to yield
        results.|Tags:conditional probability, bayes' theorem|Course language:
        Python|Target Audience:Professionals some Python experience who would
        like to expand their skill set to more advanced Python visualization
        techniques and tools.|Prerequisite course required: Hypothesis Testing
        in Python
      - >-
        Course Name:Foundations of Generative AI|Course Description:Foundations
        of Generative AI|Tags:Foundations, Generative, AI|Course language:
        None|Target Audience:No target audience|No prerequisite course required
      - >-
        Course Name:Data Science for Managers|Course Description:This course is
        designed for managers seeking to bolster their data literacy with a deep
        dive into data science tools and teams, project life cycles, and
        methods.|Tags:data driven culture, data analytics, data quality,
        storytelling, data science|Course language: TBD|Target Audience:This
        course is targeted for those who would like to understand more about
        data literacy, make more informed decisions and identify data-driven
        solutions through data science tools and methods.|No prerequisite course
        required
---

SentenceTransformer based on BAAI/bge-base-en-v1.5

This is a sentence-transformers model finetuned from BAAI/bge-base-en-v1.5. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: BAAI/bge-base-en-v1.5
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
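The Pooling module above uses CLS-token pooling (`pooling_mode_cls_token: True`), and the final Normalize module L2-normalizes the pooled vector. A minimal numpy sketch of those two post-Transformer steps (illustrative only; `token_embeddings` stands in for the hidden states the BertModel produces):

```python
import numpy as np

def cls_pool_and_normalize(token_embeddings: np.ndarray) -> np.ndarray:
    """Apply CLS-token pooling, then L2 normalization.

    token_embeddings: shape (batch, seq_len, 768), the hidden states
    from the Transformer module.
    """
    cls = token_embeddings[:, 0, :]  # Pooling: take the [CLS] token embedding
    norms = np.linalg.norm(cls, axis=1, keepdims=True)
    return cls / norms               # Normalize: unit-length output vectors
```

Because the output vectors have unit length, downstream cosine similarity reduces to a plain dot product.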

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("datasocietyco/bge-base-en-v1.5-course-recommender-v4python")
# Run inference
sentences = [
    "Foundations of Probability Theory in Python. This course guides learners through a comprehensive review of advanced statistics topics on probability, such as permutations and combinations, joint probability, conditional probability, and marginal probability. Learners will also become familiar with Bayes’ theorem, a rule that provides a way to calculate the probability of a cause given its outcome. By the end of this course, learners will also be able to assess the likelihood of events being independent to indicate whether further statistical analysis is likely to yield results.. tags: conditional probability, bayes' theorem. Languages: Course language: Python. Prerequisites: Prerequisite course required: Hypothesis Testing in Python. Target audience: Professionals some Python experience who would like to expand their skill set to more advanced Python visualization techniques and tools..",
    "Course Name:Foundations of Probability Theory in Python|Course Description:This course guides learners through a comprehensive review of advanced statistics topics on probability, such as permutations and combinations, joint probability, conditional probability, and marginal probability. Learners will also become familiar with Bayes’ theorem, a rule that provides a way to calculate the probability of a cause given its outcome. By the end of this course, learners will also be able to assess the likelihood of events being independent to indicate whether further statistical analysis is likely to yield results.|Tags:conditional probability, bayes' theorem|Course language: Python|Target Audience:Professionals some Python experience who would like to expand their skill set to more advanced Python visualization techniques and tools.|Prerequisite course required: Hypothesis Testing in Python",
    'Course Name:Foundations of Generative AI|Course Description:Foundations of Generative AI|Tags:Foundations, Generative, AI|Course language: None|Target Audience:No target audience|No prerequisite course required',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
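Since the model's Similarity Function is cosine similarity and the embeddings are already L2-normalized, `model.similarity` amounts to a pairwise cosine computation. A numpy sketch of the equivalent calculation (illustrative, not the library's implementation):

```python
import numpy as np

def cosine_similarity_matrix(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Pairwise cosine similarity between the rows of a and b."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    # For already-normalized embeddings this is simply a matrix product
    return a @ b.T
```

Applied to the `[3, 768]` embeddings above, this yields the same `[3, 3]` similarity matrix shape as `model.similarity`.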

Training Details

Training Dataset

Unnamed Dataset

  • Size: 48 training samples
  • Columns: anchor and positive
  • Approximate statistics based on the first 48 samples:
      anchor:   string, min: 49 tokens, mean: 188.12 tokens, max: 322 tokens
      positive: string, min: 47 tokens, mean: 186.12 tokens, max: 320 tokens
  • Samples:
    1. anchor: Outlier Detection with DBSCAN in Python. Density-Based Spatial Clustering of Applications with Noise, or DBSCAN, contrasts groups of densely-packed data with points isolated in low-density regions. In this course, learners will discuss the optimal data conditions suited to this method of outlier detection. After discussing different basic varieties of anomaly detection, learners will implement DBSCAN to identify likely outliers. They will also use a balancing method called Synthetic Minority Oversampling Technique, or SMOTE, to generate additional examples of outliers and improve the anomaly detection model.. tags: outlier, SMOTE, anomaly, DBSCAN. Languages: Course language: Python. Prerequisites: Prerequisite course required: Intro to Clustering. Target audience: Professionals with some Python experience who would like to expand their skills to learn about various outlier detection techniques.
       positive: Course Name:Outlier Detection with DBSCAN in Python
    2. anchor: Foundations of Python. This course introduces learners to the fundamentals of the Python programming language. Python is one of the most widely used computer languages in the world, helpful for building web-based applications, performing data analysis, and automating tasks. By the end of this course, learners will identify how data scientists use Python, distinguish among basic data types and data structures, and perform simple arithmetic and variable-related tasks.. tags: functions, basics, data-structures, control-flow. Languages: Course language: Python. Prerequisites: Prerequisite course required: Version Control with Git. Target audience: This is an introductory level course for data scientists who want to learn basics of Python and implement different data manipulation techniques using popular data wrangling Python libraries..
       positive: Course Name:Foundations of Python
    3. anchor: Text Generation with LLMs in Python. This course provides a practical introduction to the latest advancements in generative AI with a focus on text. To start, the course explores the use of reinforcement learning in natural language processing (NLP). Learners will delve into approaches for conversational and question-answering (QA) tasks, highlighting the capabilities, limitations, and use cases of models available in the Hugging Face library, such as Dolly v2. Finally, learners will gain hands-on experience in creating their own chatbot by using the concepts of Retrieval Augmented Generation (RAG) in LlamaIndex.. tags: course, provides, practical, introduction, latest, advancements, generative, AI, focus, text., start,, course, explores, use, reinforcement, learning, natural, language, processing, (NLP)., Learners, will, delve, into, approaches, conversational, question-answering, (QA), tasks,, highlighting, capabilities,, limitations,, use, cases, models, available, Hugging, Face, library,, such, as, Dolly, v2., Finally,, learners, will, gain, hands-on, experience, creating, their, own, chatbot, using, concepts, Retrieval, Augmented, Generation, (RAG), LlamaIndex.. Languages: Course language: None. Prerequisites: No prerequisite course required. Target audience: No target audience.
       positive: Course Name:Text Generation with LLMs in Python
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    
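With these parameters, the loss scales the cosine-similarity matrix between all in-batch anchors and positives by 20 and applies cross-entropy, treating each anchor's matching positive (the diagonal) as the target class and every other positive in the batch as an in-batch negative. A numpy sketch of that computation (the actual implementation lives in `sentence_transformers.losses`):

```python
import numpy as np

def multiple_negatives_ranking_loss(anchors: np.ndarray,
                                    positives: np.ndarray,
                                    scale: float = 20.0) -> float:
    """In-batch MNRL: row i's positive is column i; other columns are negatives."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = scale * (a @ p.T)  # scaled cosine-similarity matrix
    # Cross-entropy with the diagonal entries as the target classes
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))
```

When anchors and their positives embed close together and away from the rest of the batch, the diagonal dominates each row and the loss approaches zero.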

Evaluation Dataset

Unnamed Dataset

  • Size: 12 evaluation samples
  • Columns: anchor and positive
  • Approximate statistics based on the first 12 samples:
      anchor:   string, min: 46 tokens, mean: 162.92 tokens, max: 363 tokens
      positive: string, min: 44 tokens, mean: 160.92 tokens, max: 361 tokens
  • Samples:
    1. anchor: Fundamentals of Deep Learning for Multi GPUs. Find out how to use multiple GPUs to train neural networks and effectively parallelize\ntraining of deep neural networks using TensorFlow.. tags: multiple GPUs, neural networks, TensorFlow, parallelize. Languages: Course language: Python. Prerequisites: No prerequisite course required. Target audience: Professionals want to train deep neural networks on multi-GPU technology to shorten\nthe training time required for data-intensive applications.
       positive: Course Name:Fundamentals of Deep Learning for Multi GPUs
    2. anchor: Building Transformer-Based NLP Applications (NVIDIA). Learn how to apply and fine-tune a Transformer-based Deep Learning model to Natural Language Processing (NLP) tasks. In this course, you'll construct a Transformer neural network in PyTorch, Build a named-entity recognition (NER) application with BERT, Deploy the NER application with ONNX and TensorRT to a Triton inference server. Upon completion, you’ll be proficient i.n task-agnostic applications of Transformer-based models. Data Society's instructors are certified by NVIDIA’s Deep Learning Institute to teach this course.. tags: named-entity recognition, text, Natural language processing, classification, NLP, NER. Languages: Course language: Python. Prerequisites: No prerequisite course required. Target audience: Professionals with basic knowledge of neural networks and want to expand their knowledge in the world of Natural langauge processing.
       positive: Course Name:Building Transformer-Based NLP Applications (NVIDIA)
    3. anchor: Nonlinear Regression in Python. In this course, learners will practice implementing a variety of nonlinear regression techniques in Python to model complex relationships beyond simple linear patterns. They will learn to interpret key transformations, including logarithmic (log-log, log-linear) and polynomial models, and identify interaction effects between predictor variables. Through hands-on exercises, they will also develop practical skills in selecting, fitting, and validating the most appropriate nonlinear model for their data.. tags: nonlinear, regression. Languages: Course language: Python. Prerequisites: Prerequisite course required: Multiple Linear Regression. Target audience: This is an intermediate level course for data scientists who want to learn to understand and estimate relationships between a set of independent variables and a continuous dependent variable..
       positive: Course Name:Nonlinear Regression in Python
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • learning_rate: 3e-06
  • max_steps: 24
  • warmup_ratio: 0.1
  • batch_sampler: no_duplicates
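With lr_scheduler_type: linear, warmup_ratio: 0.1, and max_steps: 24, the learning rate ramps up linearly over the first few steps and then decays linearly to zero. A sketch of that schedule (mirroring the shape of transformers' linear scheduler; the exact warmup-step rounding here is an assumption):

```python
import math

def linear_lr(step: int, base_lr: float = 3e-06,
              max_steps: int = 24, warmup_ratio: float = 0.1) -> float:
    """Linear warmup followed by linear decay to zero."""
    warmup_steps = math.ceil(warmup_ratio * max_steps)  # 3 steps for this run
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    return base_lr * max(0.0, (max_steps - step) / max(1, max_steps - warmup_steps))
```

The peak learning rate of 3e-06 is reached at the end of warmup and falls to zero by step 24.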

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 3e-06
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 3.0
  • max_steps: 24
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch    Step   Training Loss   Validation Loss
6.6667   20     0.046           0.0188

Framework Versions

  • Python: 3.9.13
  • Sentence Transformers: 3.1.1
  • Transformers: 4.45.1
  • PyTorch: 2.2.2
  • Accelerate: 0.34.2
  • Datasets: 3.0.0
  • Tokenizers: 0.20.0

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}