tags:
  - sentence-transformers
  - sentence-similarity
  - feature-extraction
  - dense
  - generated_from_trainer
  - dataset_size:14131
  - loss:MultipleNegativesRankingLoss
base_model: sentence-transformers/all-MiniLM-L6-v2
widget:
  - source_sentence: >-
      Honors Thesis I. Business students with outstanding academic records may
      undertake an Honors Thesis. The topic is of the student's choice but must
      have some original aspect in the question being explored, the data set, or
      in the methods that are used. It must also be of sufficient academic rigor
      to meet the approval of a faculty advisor with expertise in the project's
      area. Students enroll each semester in a 9-unit independent study course
      with their faculty advisor for the project (70-500 in the fall and 70-501
      in the spring). Students and their faculty advisor develop a course
      description for the project and submit it for approval as two 9-unit
      courses to the BA department. Enrollment by permission of the BA Program.
      Industry: business & management. Level: advanced.
    sentences:
      - project management
      - statistics
      - natural language processing
  - source_sentence: 'Psychology of Sleep. TBA Industry: psychology. Level: intermediate.'
    sentences:
      - scientific computing
      - decision making
      - user research
  - source_sentence: >-
      Transition Design. Designing for Systems-Level Change. This course will
      provide an overview of the emerging field of Transition Design, which
      proposes societal transitions toward more sustainable futures. The idea of
      intentional (designed) societal transitions has become a global meme and
      involves an understanding of the complex dynamics of
      socio-technical-ecological systems which form the context for many of
      today's wicked problems (climate change, loss of biodiversity, pollution,
      growing gap between rich/poor, etc.). Through a mix of lecture, readings,
      classroom activities and projects, students will be introduced to the
      emerging Transition Design process which focuses on framing problems in
      large, spatio-temporal contexts, resolving conflict among stakeholder
      groups and facilitating the co-creation, and transition towards,
      desirable, long-term futures. This course will prepare students for work
      in transdisciplinary teams to address large, societal problems that
      require a deep understanding of the anatomy and dynamics of complex
      systems. Industry: design & hci. Level: advanced.
    sentences:
      - hardware prototyping
      - stakeholder management
      - mathematical modeling
  - source_sentence: >-
      Advanced Biochemistry. This is a special topics course in which selected
      topics in biochemistry will be analyzed in depth with emphasis on class
      discussion of papers from the recent research literature. Topics change
      yearly. Recent topics have included single molecule analysis of catalysis
      and conformational changes; intrinsically disordered proteins; cooperative
      interactions of aspartate transcarbamoylase; and the mechanism of
      ribosomal protein synthesis. Industry: biological sciences. Level:
      advanced.
    sentences:
      - control systems
      - vector calculus
      - user research
  - source_sentence: >-
      Metrics for Technology Products & Services. The Metrics for Technology
      Products & Services course provides an in-depth understanding and practice
      of applying metrics to plan and track the development of technology
      products and services and improve them over time by managing their market
      performance and value delivery. The course utilizes a business lens to
      understand and leverage metrics to generate questions and provide answers
      to meet business and customer goals, including delivered value and
      performance outcomes. Students will be exposed to a set of metrics
      architectures and their specific applications at different levels of work
      aggregation, namely team, program, and portfolio. Value stream mapping and
      analysis will be taught to identify opportunities for delivering value via
      adoption, cost reductions, and organizational capabilities. Through
      team-oriented case study assignments, students can select and design
      metrics systems to address business needs and value generation for product
      and service development and operations. Industry: business & management.
      Level: advanced.
    sentences:
      - industrial engineering
      - presentation skills
      - product design
pipeline_tag: sentence-similarity
library_name: sentence-transformers

Skill SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2 within Skill Taxonomy embedding space

This is a sentence-transformers model finetuned from sentence-transformers/all-MiniLM-L6-v2. It maps sentences and paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, and clustering within the skill taxonomy space.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: sentence-transformers/all-MiniLM-L6-v2
  • Maximum Sequence Length: 256 tokens
  • Output Dimensionality: 384 dimensions
  • Similarity Function: Cosine Similarity

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 256, 'do_lower_case': False, 'architecture': 'BertModel'})
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
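
The Pooling and Normalize modules above are simple to reproduce outside the library. As a minimal sketch (NumPy, with random arrays standing in for real BertModel token embeddings), masked mean pooling followed by L2 normalization looks like:

```python
import numpy as np

def mean_pool_and_normalize(token_embeddings, attention_mask):
    """Masked mean pooling (module 1) followed by L2 normalization (module 2)."""
    mask = attention_mask[:, :, None].astype(token_embeddings.dtype)
    summed = (token_embeddings * mask).sum(axis=1)
    counts = np.clip(mask.sum(axis=1), 1e-9, None)   # avoid division by zero
    pooled = summed / counts                          # mean over non-padding tokens only
    return pooled / np.linalg.norm(pooled, axis=1, keepdims=True)

# Dummy transformer output: batch of 2, seq_len 5, hidden size 384
rng = np.random.default_rng(0)
tokens = rng.normal(size=(2, 5, 384))
mask = np.array([[1, 1, 1, 0, 0], [1, 1, 1, 1, 1]])  # first row has two padding tokens
emb = mean_pool_and_normalize(tokens, mask)
print(emb.shape)                     # (2, 384)
print(np.linalg.norm(emb, axis=1))   # each row has unit norm
```

Because of the final Normalize step, dot product and cosine similarity coincide on the model's outputs.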

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("sm-riti16/course-skill-bi-encoder")
# Run inference
sentences = [
    'Metrics for Technology Products & Services. The Metrics for Technology Products & Services course provides an in-depth understanding and practice of applying metrics to plan and track the development of technology products and services and improve them over time by managing their market performance and value delivery. The course utilizes a business lens to understand and leverage metrics to generate questions and provide answers to meet business and customer goals, including delivered value and performance outcomes. Students will be exposed to a set of metrics architectures and their specific applications at different levels of work aggregation, namely team, program, and portfolio. Value stream mapping and analysis will be taught to identify opportunities for delivering value via adoption, cost reductions, and organizational capabilities. Through team-oriented case study assignments, students can select and design metrics systems to address business needs and value generation for product and service development and operations. Industry: business & management. Level: advanced.',
    'product design',
    'presentation skills',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[1.0000, 0.3146, 0.2180],
#         [0.3146, 1.0000, 0.5224],
#         [0.2180, 0.5224, 1.0000]])

Training Details

Training Dataset

Unnamed Dataset

  • Size: 14,131 training samples
  • Columns: sentence_0 and sentence_1
  • Approximate statistics based on the first 1000 samples:
    • sentence_0: string (min: 14 tokens, mean: 150.13 tokens, max: 256 tokens)
    • sentence_1: string (min: 3 tokens, mean: 4.14 tokens, max: 9 tokens)
  • Samples (sentence_0 → sentence_1):
    • "Design Practicum. This course provides 3 units of pass/fail credit for students participating in a design internship. The student must be registered for this course during the internship, in order to earn the credit. In the summer semester, the course must be paid for as an additional course, as summer courses are not part of the normal fall/spring academic year. At the end of the term, the student's supervisor must email the course coordinator with a brief statement describing the student's activities, and an evaluation of the student's performance. Students are required to submit a statement, reflecting on insights gained from the internship experience. Upon receipt of both statements, the course coordinator will assign a grade of either P or N, depending on the outcome. Industry: design & hci. Level: intermediate." → data analysis
    • "Service Design. In this course, we will collectively define and study services and product service systems, and learn the basics of designing them. We will do this through lectures, studio projects, and verbal and written exposition. Classwork will be done individually and in teams. Industry: design & hci. Level: advanced." → project management
    • "Study Abroad. Students are encouraged to pursue various international collaborative programs offered through the department of Electrical and Computer Engineering. Industry: electrical & computer engineering. Level: intro." → industrial engineering
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim",
        "gather_across_devices": false
    }
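
MultipleNegativesRankingLoss treats every other in-batch positive as a negative: cosine similarities are scaled by 20.0, and each anchor's own positive must win a softmax over the batch. A self-contained NumPy sketch of that computation (illustrative, not the library's implementation):

```python
import numpy as np

def mnrl(anchors, positives, scale=20.0):
    """Cross-entropy over scaled cosine similarities; the diagonal is the true pairing."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    scores = scale * (a @ p.T)                     # (batch, batch) cos_sim * scale
    scores -= scores.max(axis=1, keepdims=True)    # numerical stability
    log_probs = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

# Perfectly matched pairs: each anchor equals its positive and is orthogonal
# to every other pair, so the loss is close to zero.
pairs = np.eye(4)
print(mnrl(pairs, pairs))                          # near 0

# Shuffling the positives breaks the diagonal pairing, so the loss is large.
print(mnrl(pairs, np.roll(pairs, 1, axis=0)))
```

This is why larger batch sizes generally help with this loss: each example is contrasted against more in-batch negatives.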
    

Training Hyperparameters

Non-Default Hyperparameters

  • per_device_train_batch_size: 64
  • per_device_eval_batch_size: 64
  • multi_dataset_batch_sampler: round_robin

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: no
  • prediction_loss_only: True
  • per_device_train_batch_size: 64
  • per_device_eval_batch_size: 64
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1
  • num_train_epochs: 3
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • parallelism_config: None
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • project: huggingface
  • trackio_space_id: trackio
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • hub_revision: None
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: no
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • liger_kernel_config: None
  • eval_use_gather_object: False
  • average_tokens_across_devices: True
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: round_robin
  • router_mapping: {}
  • learning_rate_mapping: {}

Training Logs

Epoch Step Training Loss
2.2624 500 3.114

Framework Versions

  • Python: 3.12.12
  • Sentence Transformers: 5.1.2
  • Transformers: 4.57.2
  • PyTorch: 2.9.1+cpu
  • Accelerate: 1.12.0
  • Datasets: 4.4.1
  • Tokenizers: 0.22.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}