| ther in use cases such as search where you want to compare shorter queries against larger documents. |
|
|
| `Document` embeddings are optimized for larger pieces of text to compare queries against. |
| Query: |
| `Document` and `Query` are used together in use cases such as search where you want to compare shorter queries against larger documents. |
|
|
| `Query` embeddings are optimized for shorter texts, such as questions or keywords. |
| """ |
|
|
| Symmetric = "symmetric" |
| Document = "document" |
| Query = "query" |
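The split between the three representations can be sketched as a small helper. Note this is an illustration only: `pick_representation` and the `"corpus"`/`"query"` role names are assumptions, not part of the client API.

```python
from enum import Enum


class SemanticRepresentation(Enum):
    Symmetric = "symmetric"
    Document = "document"
    Query = "query"


def pick_representation(role: str) -> SemanticRepresentation:
    """Illustrative helper: map a text's role in the workload to a representation."""
    if role == "query":
        # Short search queries are embedded with Query ...
        return SemanticRepresentation.Query
    if role == "corpus":
        # ... and the larger documents they are compared against with Document.
        return SemanticRepresentation.Document
    # Peer-to-peer comparisons (clustering, similarity) use Symmetric on both sides.
    return SemanticRepresentation.Symmetric
```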
|
|
|
|
| @dataclass(frozen=True) |
| class SemanticEmbeddingRequest: |
| """ |
| Embeds a text and returns vectors that can be used for downstream tasks (e.g. semantic similarity) and models (e.g. classifiers). |
|
|
| Parameters: |
| prompt |
| The text and/or image(s) to be embedded. |
| representation |
| Semantic representation to embed the prompt with. |
| compress_to_size |
| Options available: 128 |
|
|
| The default behavior is to return the full embedding, but you can optionally request an embedding compressed to a smaller set of dimensions. |
|
|
| Full embedding sizes for supported models: |
| - luminous-base: 5120 |
|
|
| The 128 size is expected to come with a small drop in accuracy (4-6%), with the benefit of being much smaller, which makes comparing these embeddings much faster for use cases where speed is critical.
|
|
| The 128 size can also perform better if you are embedding very short texts or documents.
|
|
| normalize |
| Return normalized embeddings. This can be used to save on additional compute when applying a cosine similarity metric. |
|
|
| Note that at the moment this parameter does not yet have any effect. This will change as soon as the
| corresponding feature is available in the backend.
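The compute saving mentioned above comes from the fact that, for unit-length vectors, cosine similarity reduces to a plain dot product. A minimal sketch with pure-Python stand-ins (not client code):

```python
import math


def normalize(v):
    """Scale a vector to unit length."""
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]


def dot(a, b):
    return sum(x * y for x, y in zip(a, b))


def cosine(a, b):
    """Full cosine similarity, including both norm computations."""
    return dot(a, b) / (math.sqrt(dot(a, a)) * math.sqrt(dot(b, b)))


a, b = [3.0, 4.0], [4.0, 3.0]
an, bn = normalize(a), normalize(b)
# For pre-normalized embeddings the cheaper dot product gives the same answer.
assert abs(dot(an, bn) - cosine(a, b)) < 1e-9
```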
|
|
| contextual_control_threshold (float, default None) |
| If set to None, attention control parameters only apply to those tokens that have |
| explicitly been set in the request. |
| If set to a non-None value, we apply the control parameters to similar tokens as well. |
| Controls that have been applied to one token will then be applied to all other tokens |
| that have at least the similarity score defined by this parameter. |
| The similarity score is the cosine similarity of token embeddings. |
|
|
| control_log_additive (bool, default True) |
| True: apply control by adding the log(control_factor) to attention scores. |
| False: apply control by (attention_scores - attention_scores.min(-1)) * control_factor
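The two control modes can be sketched in plain Python. This is a reading of the description above, not the backend implementation, and `apply_control` is a hypothetical name:

```python
import math


def apply_control(attention_scores, control_factor, log_additive=True):
    # Sketch only: mirrors the two formulas described in the docstring.
    if log_additive:
        # control_log_additive=True: add log(control_factor) to each score.
        return [s + math.log(control_factor) for s in attention_scores]
    # control_log_additive=False: shift so the minimum is zero, then scale.
    lo = min(attention_scores)
    return [(s - lo) * control_factor for s in attention_scores]
```

With `control_factor=1.0` the log-additive mode is a no-op (log(1) = 0), which is a quick sanity check on the formula.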
|
|
| Examples
| >>> # Texts to compare
| >>> texts = [
| ...     "deep learning",
| ...     "artificial intelligence",
| ...     "deep diving",
| ...     "artificial snow",
| ... ]
| >>> embeddings = []
| >>> for text in texts:
| ...     request = SemanticEmbeddingRequest(prompt=Prompt.from_text(text), representation=SemanticRepresentation.Symmetric)
| ...     result = model.semantic_embed(request)
| ...     embeddings.append(result.embedding)
| """ |
|
|
| prompt: Prompt |
| representation: SemanticRepresentation |
| compress_to_size: Optional[int] = None |
| normalize: bool = False |
| contextual_control_threshold: Optional[float] = None |
| control_log_additive: Optional[bool] = True |
|
|
| def to_json(self) -> Mapping[str, Any]: |
| return { |
| **self._asdict(), |
| "representation": self.representation.value, |
| "prompt": self.prompt.to_json(), |
| } |
|
|
| def _asdict(self) -> Mapping[str, Any]: |
| return asdict(self) |
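The serialization pattern above (spread `asdict`, then override the enum and prompt fields with JSON-safe values) can be shown with a minimal stand-alone dataclass; `Rep` and `Req` are illustrative names, not client types:

```python
from dataclasses import asdict, dataclass
from enum import Enum
from typing import Any, Mapping


class Rep(Enum):
    Symmetric = "symmetric"


@dataclass(frozen=True)
class Req:
    representation: Rep
    normalize: bool = False

    def to_json(self) -> Mapping[str, Any]:
        # asdict() keeps the Enum member as-is; overriding it with .value
        # yields a payload that json.dumps can handle directly.
        return {**asdict(self), "representation": self.representation.value}
```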
|
|
|
|
| @dataclass(frozen=True) |
| class BatchSemanticEmbeddingRequest: |
| """ |
| Embeds multiple multi-modal prompts and returns their embeddings in the same order as they were supplied. |
|
|
| Parameters: |
| prompts |
| A list of texts and/or images to be embedded. |
| representation |
| Semantic representation to embed the prompt with. |
| compress_to_size |
| Options available: 128 |
|
|
| The default behavior is to return the full embedding, but you can optionally request an embedding compressed to a smaller set of dimensions. |
|
|
| Full embedding sizes for supported models: |
| - luminous-base: 5120 |
|
|
| The 128 size is expected to come with a small drop in accuracy (4-6%), with the benefit of being much smaller, which makes comparing these embeddings much faster for use cases where spe