---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:64
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
base_model: Snowflake/snowflake-arctic-embed-l
widget:
- source_sentence: >-
1. What is the first step to take when implementing architecture as code
according to the provided context?
2. How should the content of each file be formatted when outputting code?
sentences:
- >-
architecture is, in the end, implemented as code.\\n\\nThink step by
step and reason yourself to the right decisions to make sure we get it
right.\\nYou will first lay out the names of the core classes,
functions, methods that will be necessary, as well as a quick comment on
their purpose.\\n\\nThen you will output the content of each file
including ALL code.\\nEach file must strictly follow a markdown code
block format, where the following tokens must be replaced such
that\\nFILENAME is the lowercase file name including the file
extension,\\nLANG is the markup code block language for the code\'s
language, and CODE is the
code:\\n\\nFILENAME\\n\`\`\`LANG\\nCODE\\n\`\`\`\\n\\nYou will start
with the \\"entrypoint\\" file, then go to the
- >-
Stream tokens:
for message, metadata in graph.stream( {"question": "What is Task
Decomposition?"}, stream_mode="messages"): print(message.content,
end="|")
|Task| decomposition| is| the| process| of| breaking| down| complex|
tasks| into| smaller|,| more| manageable| steps|.| It| can| be|
achieved| through| techniques| like| Chain| of| Thought| (|Co|T|)|
prompting|,| which| encourages| the| model| to| think| step| by| step|,|
or| through| more| structured| methods| like| the| Tree| of| Thoughts|.|
This| approach| not| only| simplifies| task| execution| but| also|
provides| insights| into| the| model|'s| reasoning| process|.||
tipFor async invocations, use:result = await graph.ainvoke(...)andasync
for step in graph.astream(...):
- >-
return {"answer": response.content}graph_builder =
StateGraph(State).add_sequence([analyze_query, retrieve,
generate])graph_builder.add_edge(START, "analyze_query")graph =
graph_builder.compile()
- source_sentence: >-
1. What is the purpose of the DocumentTransformer object in the context
provided?
2. Where can one find detailed documentation on how to use
DocumentTransformers?
sentences:
- >-
Learn more about splitting text using different methods by reading the
how-to docs
Code (py or js)
Scientific papers
Interface: API reference for the base interface.
DocumentTransformer: Object that performs a transformation on a list
of Document objects.
Docs: Detailed documentation on how to use DocumentTransformers
Integrations
Interface: API reference for the base interface.
- >-
{'retrieve': {'context':
[Document(id='a42dc78b-8f76-472a-9e25-180508af74f3', metadata={'source':
'https://lilianweng.github.io/posts/2023-06-23-agent/', 'start_index':
1585}, page_content='Fig. 1. Overview of a LLM-powered autonomous agent
system.\nComponent One: Planning#\nA complicated task usually involves
many steps. An agent needs to know what they are and plan ahead.\nTask
Decomposition#\nChain of thought (CoT; Wei et al. 2022) has become a
standard prompting technique for enhancing model performance on complex
tasks. The model is instructed to “think step by step” to utilize more
test-time computation to decompose hard tasks into smaller and simpler
steps. CoT transforms big tasks into multiple manageable tasks and shed
lights into
- >-
Do I need to use LangGraph?LangGraph is not required to build a RAG
application. Indeed, we can implement the same application logic through
invocations of the individual components:question = "..."retrieved_docs
= vector_store.similarity_search(question)docs_content =
"\n\n".join(doc.page_content for doc in retrieved_docs)prompt =
prompt.invoke({"question": question, "context": docs_content})answer =
llm.invoke(prompt)The benefits of LangGraph include:
Support for multiple invocation modes: this logic would need to be
rewritten if we wanted to stream output tokens, or stream the results of
individual steps;
Automatic support for tracing via LangSmith and deployments via
LangGraph Platform;
- source_sentence: >-
1. What mode did the agent move into after the clarifications were made?
2. What instructions were given to the agent regarding the code writing
process?
sentences:
- >-
= RecursiveCharacterTextSplitter(chunk_size=1000,
chunk_overlap=200)all_splits = text_splitter.split_documents(docs)#
Update metadata (illustration purposes)total_documents =
len(all_splits)third = total_documents // 3for i, document in
enumerate(all_splits): if i < third:
document.metadata["section"] = "beginning" elif i < 2 * third:
document.metadata["section"] = "middle" else:
document.metadata["section"] = "end"# Index chunksvector_store =
InMemoryVectorStore(embeddings)_ =
vector_store.add_documents(all_splits)# Define schema for searchclass
Search(TypedDict): """Search query.""" query: Annotated[str, ...,
"Search query to run."] section: Annotated[
Literal["beginning", "middle", "end"],
- >-
limitations:'), Document(id='ca7f06e4-2c2e-4788-9a81-2418d82213d9',
metadata={'source':
'https://lilianweng.github.io/posts/2023-06-23-agent/', 'start_index':
32942, 'section': 'end'}, page_content='}\n]\nThen after these
clarification, the agent moved into the code writing mode with a
different system message.\nSystem message:'),
Document(id='1fcc2736-30f4-4ef6-90f2-c64af92118cb', metadata={'source':
'https://lilianweng.github.io/posts/2023-06-23-agent/', 'start_index':
35127, 'section': 'end'}, page_content='"content": "You will get
instructions for code to write.\\nYou will write a very long answer.
Make sure that every detail of the architecture is, in the end,
implemented as code.\\nMake sure that every detail of the architecture
is,
- >-
Build a Retrieval Augmented Generation (RAG) App: Part 1 | 🦜️🔗
LangChain
- source_sentence: |-
1. What is the purpose of the `getpass` module in the provided context?
2. How is the chat model initialized in the given code snippet?
sentences:
- >-
Select chat model:Groq▾GroqOpenAIAnthropicAzureGoogle
VertexAWSCohereNVIDIAFireworks AIMistral AITogether AIIBM
watsonxDatabrickspip install -qU "langchain[groq]"import getpassimport
osif not os.environ.get("GROQ_API_KEY"): os.environ["GROQ_API_KEY"] =
getpass.getpass("Enter API key for Groq: ")from langchain.chat_models
import init_chat_modelllm = init_chat_model("llama3-8b-8192",
model_provider="groq")
- >-
One of the most powerful applications enabled by LLMs is sophisticated
question-answering (Q&A) chatbots. These are applications that can
answer questions about specific source information. These applications
use a technique known as Retrieval Augmented Generation, or RAG.
This is a multi-part tutorial:
- >-
user's request in a straightforward manner. Then describe the task
process and show your analysis and model inference results to the user
in the first person. If inference results contain a file path, must tell
the user the complete file path.")]}}----------------{'generate':
{'answer': 'Task decomposition is the process of breaking down a complex
task into smaller, more manageable steps. This technique, often enhanced
by methods like Chain of Thought (CoT) or Tree of Thoughts, allows
models to reason through tasks systematically and improves performance
by clarifying the thought process. It can be achieved through simple
prompts, task-specific instructions, or human inputs.'}}----------------
- source_sentence: >-
1. How do chat models utilize the state of the graph to recover sources
for generated answers?
2. What is the significance of the "context" field in the state when
returning sources?
sentences:
- |-
Docs: Detailed documentation on how to use embeddings.
Integrations: 30+ integrations to choose from.
Interface: API reference for the base interface.
VectorStore: Wrapper around a vector database, used for storing and
querying embeddings.
Docs: Detailed documentation on how to use vector stores.
Integrations: 40+ integrations to choose from.
Interface: API reference for the base interface.
- >-
Returning sources
Note that by storing the retrieved context in the state of the graph, we
recover sources for the model's generated answer in the "context" field
of the state. See this guide on returning sources for more detail.
Go deeper
Chat models take in a sequence of messages and return a message.
- display(Image(graph.get_graph().draw_mermaid_png()))
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: Unknown
type: unknown
metrics:
- type: cosine_accuracy@1
value: 1
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 1
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 1
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 1
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 1
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.3333333333333333
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.2
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.1
name: Cosine Precision@10
- type: cosine_recall@1
value: 1
name: Cosine Recall@1
- type: cosine_recall@3
value: 1
name: Cosine Recall@3
- type: cosine_recall@5
value: 1
name: Cosine Recall@5
- type: cosine_recall@10
value: 1
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 1
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 1
name: Cosine Mrr@10
- type: cosine_map@100
value: 1
name: Cosine Map@100
---
SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
This is a sentence-transformers model finetuned from Snowflake/snowflake-arctic-embed-l. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: Snowflake/snowflake-arctic-embed-l
- Maximum Sequence Length: 512 tokens
- Output Dimensionality: 1024 dimensions
- Similarity Function: Cosine Similarity
Model Sources
- Documentation: Sentence Transformers Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Sentence Transformers on Hugging Face
Full Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("Rsr2425/simplify-ft-arctic-embed-l")
# Run inference
sentences = [
    '1. How do chat models utilize the state of the graph to recover sources for generated answers? \n2. What is the significance of the "context" field in the state when returning sources?',
    'Returning sources\u200b\nNote that by storing the retrieved context in the state of the graph, we recover sources for the model\'s generated answer in the "context" field of the state. See this guide on returning sources for more detail.\nGo deeper\u200b\nChat models take in a sequence of messages and return a message.',
    'Docs: Detailed documentation on how to use embeddings.\nIntegrations: 30+ integrations to choose from.\nInterface: API reference for the base interface.\n\nVectorStore: Wrapper around a vector database, used for storing and\nquerying embeddings.\n\nDocs: Detailed documentation on how to use vector stores.\nIntegrations: 40+ integrations to choose from.\nInterface: API reference for the base interface.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 1024)

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# torch.Size([3, 3])
```
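Since the model was fine-tuned on retrieval-style question/passage pairs, a common use is ranking candidate passages against a query. Below is a minimal semantic search sketch; the `corpus` passages and `query` string are made up for illustration:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Rsr2425/simplify-ft-arctic-embed-l")

# Hypothetical passages and question, for illustration only
corpus = [
    "Task decomposition breaks a complex task into smaller, manageable steps.",
    "A vector store wraps a vector database for storing and querying embeddings.",
]
query = "What is task decomposition?"

query_embedding = model.encode([query])   # shape: (1, 1024)
corpus_embeddings = model.encode(corpus)  # shape: (2, 1024)

# Cosine similarity of the query against every passage
# (cosine is this model's configured similarity function)
scores = model.similarity(query_embedding, corpus_embeddings)  # shape: [1, 2]
best = int(scores.argmax())
print(corpus[best])
```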
Evaluation
Metrics
Information Retrieval
- Evaluated with `InformationRetrievalEvaluator`
| Metric | Value |
|---|---|
| cosine_accuracy@1 | 1.0 |
| cosine_accuracy@3 | 1.0 |
| cosine_accuracy@5 | 1.0 |
| cosine_accuracy@10 | 1.0 |
| cosine_precision@1 | 1.0 |
| cosine_precision@3 | 0.3333 |
| cosine_precision@5 | 0.2 |
| cosine_precision@10 | 0.1 |
| cosine_recall@1 | 1.0 |
| cosine_recall@3 | 1.0 |
| cosine_recall@5 | 1.0 |
| cosine_recall@10 | 1.0 |
| cosine_ndcg@10 | 1.0 |
| cosine_mrr@10 | 1.0 |
| cosine_map@100 | 1.0 |
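Note that with exactly one relevant document per query, precision@k is capped at 1/k, which matches the 0.3333, 0.2, and 0.1 values above. To run the same kind of evaluation on your own data, a minimal sketch using `InformationRetrievalEvaluator` might look like this (the IDs, texts, and relevance judgments below are hypothetical):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("Rsr2425/simplify-ft-arctic-embed-l")

# Hypothetical queries, corpus, and relevance judgments for illustration
queries = {"q1": "What is task decomposition?"}
corpus = {
    "d1": "Task decomposition breaks complex tasks into smaller steps.",
    "d2": "A vector store wraps a vector database.",
}
relevant_docs = {"q1": {"d1"}}  # query id -> set of relevant corpus ids

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="example")
results = evaluator(model)
# Metric names are prefixed with the evaluator name, e.g. "example_cosine_ndcg@10"
print(results)
```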
Training Details
Training Dataset
Unnamed Dataset
- Size: 64 training samples
- Columns: `sentence_0` and `sentence_1`
- Approximate statistics based on the first 64 samples:

|  | sentence_0 | sentence_1 |
|---|---|---|
| type | string | string |
| details | min: 23 tokens, mean: 37.42 tokens, max: 49 tokens | min: 19 tokens, mean: 153.86 tokens, max: 286 tokens |
- Samples:

| sentence_0 | sentence_1 |
|---|---|
| 1. How do chat models utilize the state of the graph to recover sources for generated answers?<br>2. What is the significance of the "context" field in the state when returning sources? | Returning sources<br>Note that by storing the retrieved context in the state of the graph, we recover sources for the model's generated answer in the "context" field of the state. See this guide on returning sources for more detail.<br>Go deeper<br>Chat models take in a sequence of messages and return a message. |
| 1. What is the purpose of the indexing process in the data pipeline?<br>2. How does the retrieval and generation phase utilize the indexed data to respond to user queries? | Indexing: a pipeline for ingesting data from a source and indexing it. This usually happens offline.<br>Retrieval and generation: the actual RAG chain, which takes the user query at run time and retrieves the relevant data from the index, then passes that to the model.<br>Note: the indexing portion of this tutorial will largely follow the semantic search tutorial.<br>The most common full sequence from raw data to answer looks like:<br>Indexing |
| 1. What is task decomposition and how does it help in problem-solving?<br>2. Can you explain the methods used in task decomposition, such as chain of thought prompting and the tree of thoughts approach? | user's request in a straightforward manner. Then describe the task process and show your analysis and model inference results to the user in the first person. If inference results contain a file path, must tell the user the complete file path.")]Answer: Task decomposition is a technique used to break down complex tasks into smaller, manageable steps, allowing for more efficient problem-solving. This can be achieved through methods like chain of thought prompting or the tree of thoughts approach, which explores multiple reasoning possibilities at each step. It can be initiated through simple prompts, task-specific instructions, or human inputs. |

- Loss: `MatryoshkaLoss` with these parameters:

```json
{
    "loss": "MultipleNegativesRankingLoss",
    "matryoshka_dims": [768, 512, 256, 128, 64],
    "matryoshka_weights": [1, 1, 1, 1, 1],
    "n_dims_per_step": -1
}
```
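In other words, the in-batch negatives loss is applied not only to the full embeddings but also to their first 768, 512, 256, 128, and 64 dimensions, so truncated embeddings remain useful. A minimal sketch of constructing this loss combination with the Sentence Transformers API:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-l")

# MultipleNegativesRankingLoss treats other in-batch pairs as negatives;
# MatryoshkaLoss re-applies it at each truncated dimensionality, weighted
# equally (per the matryoshka_weights above)
loss = MatryoshkaLoss(
    model,
    MultipleNegativesRankingLoss(model),
    matryoshka_dims=[768, 512, 256, 128, 64],
)
```

One practical consequence of Matryoshka training: recent Sentence Transformers versions can load the model with a reduced output size, e.g. `SentenceTransformer("Rsr2425/simplify-ft-arctic-embed-l", truncate_dim=256)`, trading some accuracy for storage.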
Training Hyperparameters
Non-Default Hyperparameters
- eval_strategy: steps
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 16
- num_train_epochs: 10
- multi_dataset_batch_sampler: round_robin
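For reference, here is a self-contained training sketch that wires the non-default hyperparameters above together; the dataset is a tiny stand-in for the real 64-pair training data, and the output path is hypothetical:

```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-l")
loss = MatryoshkaLoss(
    model,
    MultipleNegativesRankingLoss(model),
    matryoshka_dims=[768, 512, 256, 128, 64],
)

# Tiny stand-in for the real (sentence_0, sentence_1) training data
train_dataset = Dataset.from_dict({
    "sentence_0": ["What is task decomposition?"],
    "sentence_1": ["Task decomposition breaks complex tasks into smaller steps."],
})

args = SentenceTransformerTrainingArguments(
    output_dir="models/simplify-ft-arctic-embed-l",  # hypothetical path
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=10,
    multi_dataset_batch_sampler="round_robin",
    # eval_strategy="steps" was also set for this run; it additionally
    # requires an eval_dataset, omitted in this sketch
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()
```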
All Hyperparameters
Click to expand
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: steps
- prediction_loss_only: True
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 16
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 5e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1
- num_train_epochs: 10
- max_steps: -1
- lr_scheduler_type: linear
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.0
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: False
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: False
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: None
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- include_for_metrics: []
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- dispatch_batches: None
- split_batches: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- use_liger_kernel: False
- eval_use_gather_object: False
- average_tokens_across_devices: False
- prompts: None
- batch_sampler: batch_sampler
- multi_dataset_batch_sampler: round_robin
Training Logs
| Epoch | Step | cosine_ndcg@10 |
|---|---|---|
| 1.0 | 4 | 1.0 |
| 2.0 | 8 | 1.0 |
| 3.0 | 12 | 1.0 |
| 4.0 | 16 | 1.0 |
| 5.0 | 20 | 1.0 |
| 6.0 | 24 | 1.0 |
| 7.0 | 28 | 1.0 |
| 8.0 | 32 | 1.0 |
| 9.0 | 36 | 1.0 |
| 10.0 | 40 | 1.0 |
Framework Versions
- Python: 3.11.11
- Sentence Transformers: 3.4.1
- Transformers: 4.48.3
- PyTorch: 2.5.1+cu124
- Accelerate: 1.3.0
- Datasets: 3.2.0
- Tokenizers: 0.21.0
Citation
BibTeX
Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```
MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
    title = {Matryoshka Representation Learning},
    author = {Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year = {2024},
    eprint = {2205.13147},
    archivePrefix = {arXiv},
    primaryClass = {cs.LG}
}
```
MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
    title = {Efficient Natural Language Response Suggestion for Smart Reply},
    author = {Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year = {2017},
    eprint = {1705.00652},
    archivePrefix = {arXiv},
    primaryClass = {cs.CL}
}
```