# SentenceTransformer based on sentence-transformers/all-distilroberta-v1
This is a Sentence Transformers model fine-tuned from sentence-transformers/all-distilroberta-v1 on the ai_alignment dataset. It maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details

### Model Description
- Model Type: Sentence Transformer
- Base model: sentence-transformers/all-distilroberta-v1
- Maximum Sequence Length: 512 tokens
- Output Dimensionality: 768 dimensions
- Similarity Function: Cosine Similarity
- Training Dataset: ai_alignment
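These properties can be read directly off the loaded model; a minimal sketch (downloads the model from the Hub):

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("pfrenee/distilroberta_ai_alignment")
print(model.max_seq_length)                      # 512
print(model.get_sentence_embedding_dimension())  # 768
```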
### Model Sources

- Documentation: [Sentence Transformers Documentation](https://sbert.net)
- Repository: [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- Hugging Face: [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False, 'architecture': 'RobertaModel'})
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```
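Note that the trailing `Normalize()` module L2-normalizes every embedding, so cosine similarity coincides with a plain dot product of the output vectors. A minimal sketch checking this:

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("pfrenee/distilroberta_ai_alignment")
emb = model.encode(["Data engineering, ETL workflows"])

# Normalize() makes each embedding unit-length, so this prints ~1.0
print(np.linalg.norm(emb[0]))
```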
## Usage

### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download the model from the Hugging Face Hub
model = SentenceTransformer("pfrenee/distilroberta_ai_alignment")

# Encode queries and candidate job descriptions
queries = [
"Data engineering, ETL workflows, cloud-based data solutions",
]
documents = [
"Qualifications and Skills Education: Bachelor's degree in Computer Science or a related field. Experience: 5+ years in Software Engineering with a focus on Data Engineering. Technical Proficiency: Expertise in Python; familiarity with JavaScript and Java is beneficial. Proficient in SQL (Postgres, Presto/Trino dialects), ETL workflows, and workflow orchestration systems (e.g. Airflow, Prefect). Knowledge of modern data file formats (e.g. Parquet, Avro, ORC) and Python data tools (e.g. pandas, Dask, Ray). Cloud and Data Solutions: Experience in building cloud-based Data Warehouse/Data Lake solutions (AWS Athena, Redshift, Snowflake) and familiarity with AWS cloud services and infrastructure-as-code tools (CDK, Terraform). Communication Skills: Excellent communication and presentation skills, fluent in English. Work Authorization: Must be authorized to work in the US. \nWork Schedule Hybrid work schedule: Minimum 3 days per week in the San Francisco office (M/W/Th), with the option to work remotely 2 days per week. \nSalary Range: $165,000-$206,000 base depending on experience \nBonus: Up to 20% annual performance bonus \nGenerous benefits package: Fully paid healthcare, monthly reimbursements for gym, commuting, cell phone & home wifi.",
"Experience with LLMs and PyTorch: Extensive experience with large language models and proficiency in PyTorch.Expertise in Parallel Training and GPU Cluster Management: Strong background in parallel training methods and managing large-scale training jobs on GPU clusters.Analytical and Problem-Solving Skills: Ability to address complex challenges in model training and optimization.Leadership and Mentorship Capabilities: Proven leadership in guiding projects and mentoring team members.Communication and Collaboration Skills: Effective communication skills for conveying technical concepts and collaborating with cross-functional teams.Innovation and Continuous Learning: Passion for staying updated with the latest trends in AI and machine learning.\n\nWhat We Offer\n\nMarket competitive and pay equity-focused compensation structure100% paid health insurance for employees with 90% coverage for dependentsAnnual lifestyle wallet for personal wellness, learning and development, and more!Lifetime maximum benefit for family forming and fertility benefitsDedicated mental health support for employees and eligible dependentsGenerous time away including company holidays, paid time off, sick time, parental leave, and more!Lively office environment with catered meals, fully stocked kitchens, and geo-specific commuter benefits\n\nBase pay for the successful applicant will depend on a variety of job-related factors, which may include education, training, experience, location, business needs, or market demands. The expected salary range for this role is based on the location where the work will be performed and is aligned to one of 3 compensation zones. This role is also eligible to participate in a Robinhood bonus plan and Robinhood’s equity plan. For other locations not listed, compensation can be discussed with your recruiter during the interview process.\n\nZone 1 (Menlo Park, CA; New York, NY; Bellevue, WA; Washington, DC)\n\n$187,000—$220,000 USD\n\nZone 2 (Denver, CO; Westlake, TX; Chicago, IL)\n\n$165,000—$194,000 USD\n\nZone 3 (Lake Mary, FL)\n\n$146,000—$172,000 USD\n\nClick Here To Learn More About Robinhood’s Benefits.\n\nWe’re looking for more growth-minded and collaborative people to be a part of our journey in democratizing finance for all. If you’re ready to give 100% in helping us achieve our mission—we’d love to have you apply even if you feel unsure about whether you meet every single requirement in this posting. At Robinhood, we're looking for people invigorated by our mission, values, and drive to change the world, not just those who simply check off all the boxes.\n\nRobinhood embraces a diversity of backgrounds and experiences and provides equal opportunity for all applicants and employees. We are dedicated to building a company that represents a variety of backgrounds, perspectives, and skills. We believe that the more inclusive we are, the better our work (and work environment) will be for everyone. Additionally, Robinhood provides reasonable accommodations for candidates on request and respects applicants' privacy rights. To review Robinhood's Privacy Policy please review the specific policy applicable to your country.",
"experience with Transformers\nNeed to be 8+ year's of work experience. \nWe need a Data Scientist with demonstrated expertise in training and evaluating transformers such as BERT and its derivatives.\nRequired: Proficiency with Python, pyTorch, Linux, Docker, Kubernetes, Jupyter. Expertise in Deep Learning, Transformers, Natural Language Processing, Large Language Models\nPreferred: Experience with genomics data, molecular genetics. Distributed computing tools like Ray, Dask, Spark",
]
query_embeddings = model.encode_query(queries)
document_embeddings = model.encode_document(documents)
print(query_embeddings.shape, document_embeddings.shape)
# (1, 768) (3, 768)

# Get the cosine similarity scores between queries and documents
similarities = model.similarity(query_embeddings, document_embeddings)
print(similarities)
```
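`model.similarity` returns a tensor of shape `(len(queries), len(documents))`, here `(1, 3)`, with one cosine score per query-document pair. Continuing from the snippet above, a minimal sketch that ranks the documents for each query (using `torch`, which Sentence Transformers already depends on):

```python
import torch

# Sort document indices by similarity, highest first
ranking = torch.argsort(similarities, dim=1, descending=True)
for query, order in zip(queries, ranking):
    print(query[:50], "-> best document index:", order[0].item())
```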
## Evaluation

### Metrics

#### Triplet

Evaluated with `TripletEvaluator` on the `ai-job-validation` and `ai-job-test` datasets.

| Metric          | ai-job-validation | ai-job-test |
|:----------------|:-----------------:|:-----------:|
| cosine_accuracy | 0.9802            | 0.9709      |
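For reference, `cosine_accuracy` is the fraction of (anchor, positive, negative) triplets for which the anchor's cosine similarity to the positive exceeds its similarity to the negative. A minimal sketch of that computation on hypothetical triplets:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("pfrenee/distilroberta_ai_alignment")

# Hypothetical triplets: a query, a matching job posting, a non-matching one
anchors   = ["Data engineering, ETL workflows, cloud-based data solutions"]
positives = ["Expertise in Python, SQL, ETL workflows, and AWS data services."]
negatives = ["Extensive experience with large language models and PyTorch."]

pos_sim = model.similarity_pairwise(model.encode(anchors), model.encode(positives))
neg_sim = model.similarity_pairwise(model.encode(anchors), model.encode(negatives))

cosine_accuracy = (pos_sim > neg_sim).float().mean().item()
print(cosine_accuracy)  # 1.0 when every anchor is closer to its positive
```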
## Training Details

### Training Dataset

#### ai_alignment

### Evaluation Dataset

#### ai_alignment

### Training Hyperparameters

#### Non-Default Hyperparameters

- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `learning_rate`: 1e-05
- `num_train_epochs`: 6
- `warmup_ratio`: 0.1
- `batch_sampler`: no_duplicates
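As a worked example, here is a minimal sketch of a fine-tuning run that applies these non-default hyperparameters, assuming a triplet-format dataset (the data files and column layout are placeholders) and the `MultipleNegativesRankingLoss` cited at the end of this card:

```python
from datasets import load_dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MultipleNegativesRankingLoss
from sentence_transformers.training_args import BatchSamplers

model = SentenceTransformer("sentence-transformers/all-distilroberta-v1")

# Hypothetical data files with (anchor, positive, negative) columns
dataset = load_dataset("json", data_files={"train": "train.jsonl", "eval": "eval.jsonl"})

args = SentenceTransformerTrainingArguments(
    output_dir="distilroberta_ai_alignment",
    eval_strategy="steps",
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    learning_rate=1e-5,
    num_train_epochs=6,
    warmup_ratio=0.1,
    # no_duplicates keeps repeated texts out of a batch, which matters for
    # in-batch-negatives losses such as MultipleNegativesRankingLoss
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["eval"],
    loss=MultipleNegativesRankingLoss(model),
)
trainer.train()
```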
#### All Hyperparameters

<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 1e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 6
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}

</details>
### Training Logs

| Epoch  | Step | Training Loss | Validation Loss | ai-job-validation_cosine_accuracy | ai-job-test_cosine_accuracy |
|:------:|:----:|:-------------:|:---------------:|:---------------------------------:|:---------------------------:|
| -1     | -1   | -             | -               | 0.8614                             | -                           |
| 1.9608 | 100  | 0.848         | 0.3421          | 0.9802                             | -                           |
| 3.9216 | 200  | 0.3142        | 0.3138          | 0.9802                             | -                           |
| 5.8824 | 300  | 0.1828        | 0.3009          | 0.9802                             | -                           |
| -1     | -1   | -             | -               | 0.9802                             | 0.9709                      |
### Framework Versions
- Python: 3.12.11
- Sentence Transformers: 5.1.0
- Transformers: 4.55.4
- PyTorch: 2.8.0
- Accelerate: 1.10.1
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citation

### BibTeX

#### Sentence Transformers

```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss

```bibtex
@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```