---
sdk: static
pinned: false
---

Sinequa Hugging Face homepage

# About Sinequa

Sinequa transforms how work gets done. Sinequa’s Assistants augment your company by augmenting employees with a knowledgeable, accurate, secure work partner so they are more effective, more informed, more productive, and less stressed. Best of all, Sinequa Assistants streamline workflows and automatically navigate the chaotic enterprise information landscape, so that employees can skip the grind and focus on doing the kind of work that makes the most impact. Sinequa’s Assistants achieve this by combining the power of comprehensive enterprise search with the ease of generative AI in a configurable and easily managed Assistant framework, for an accurate, traceable, and fully secure conversational experience. Deploy an out-of-the-box Assistant or configure a tailored experience and specialized workflow to augment your people and your company. For more information, visit www.sinequa.com.

# Neural Search models

Sinequa Search relies on a technology called Neural Search. Neural Search is a hybrid search solution that combines keyword search and vector search. This search workflow involves three types of models, for which we deliver several versions here.
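
To make the hybrid idea concrete, here is a minimal sketch of one common way to merge a keyword result list with a vector result list, reciprocal rank fusion (RRF). This is an illustration only, not Sinequa's actual fusion logic (in Neural Search, the Passage Ranker described below orders the combined results), and the document ids are made up.

```python
# Minimal sketch of merging keyword and vector result lists with reciprocal
# rank fusion (RRF). Illustration only; this is not Sinequa's fusion logic.
def rrf(keyword_hits: list[str], vector_hits: list[str], k: int = 60) -> list[str]:
    """Merge two ranked lists of document ids into a single ranked list."""
    scores: dict[str, float] = {}
    for hits in (keyword_hits, vector_hits):
        for rank, doc_id in enumerate(hits):
            # Documents found by both retrievers accumulate score from each list.
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

print(rrf(["d1", "d2", "d3"], ["d3", "d1", "d4"]))  # ['d1', 'd3', 'd2', 'd4']
```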

The two collections below bring together the recommended model combinations for English-only and multilingual contexts.

## Vectorizer

Vectorizers are models that produce an embedding vector for a given passage or query. The passage vectors are stored in our vector index, and the query vector is used at query time to look up relevant passages in the index.

Here is an overview of the models we deliver publicly here.

| Model                        | Languages                              | Relevance | Inference Time | GPU Memory |
|------------------------------|----------------------------------------|-----------|----------------|------------|
| vectorizer-v1-S-en           | en                                     | 0.456     | 52 ms          | 330 MiB    |
| vectorizer-v1-S-multilingual | de, en, es, fr                         | 0.448     | 51 ms          | 580 MiB    |
| vectorizer.vanilla           | en                                     | 0.639     | 53 ms          | 330 MiB    |
| vectorizer.raspberry         | de, en, es, fr, it, ja, nl, pt, zs     | 0.613     | 52 ms          | 610 MiB    |
| vectorizer.hazelnut          | de, en, es, fr, it, ja, nl, pt, zs, pl | 0.590     | 52 ms          | 610 MiB    |
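
As a quick illustration of the vectorizer workflow, here is a minimal sketch that embeds a query and two passages and compares them by cosine similarity. It assumes the models load with the standard `transformers` `AutoModel` API and that mean pooling over token embeddings is an acceptable aggregation; the example texts are made up, and each model card remains the authoritative usage reference.

```python
# Minimal sketch: embed a query and passages, then rank passages by cosine
# similarity. Mean pooling is an assumption; check the model card.
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "sinequa/vectorizer.vanilla"  # any vectorizer from the table
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

def embed(texts: list[str]) -> torch.Tensor:
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        token_embeddings = model(**batch).last_hidden_state
    # Mean pooling over non-padding tokens, then L2 normalization so that a
    # dot product between vectors equals cosine similarity.
    mask = batch["attention_mask"].unsqueeze(-1).to(token_embeddings.dtype)
    embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)
    return torch.nn.functional.normalize(embeddings, dim=1)

query = embed(["how to reset my password"])
passages = embed([
    "Open Settings and choose 'Reset password'.",
    "Our offices are closed on public holidays.",
])
print(query @ passages.T)  # one cosine similarity per passage
```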

## Passage Ranker

Passage Rankers are models that produce a relevance score for a given query-passage pair; this score is used to order search results coming from keyword and vector search.

Here is an overview of the models we deliver publicly here.

| Model                             | Languages                              | Relevance | Inference Time | GPU Memory |
|-----------------------------------|----------------------------------------|-----------|----------------|------------|
| passage-ranker-v1-XS-en           | en                                     | 0.438     | 20 ms          | 170 MiB    |
| passage-ranker-v1-XS-multilingual | de, en, es, fr                         | 0.453     | 21 ms          | 300 MiB    |
| passage-ranker-v1-L-en            | en                                     | 0.466     | 356 ms         | 1060 MiB   |
| passage-ranker-v1-L-multilingual  | de, en, es, fr                         | 0.471     | 357 ms         | 1130 MiB   |
| passage-ranker.chocolate          | en                                     | 0.484     | 64 ms          | 550 MiB    |
| passage-ranker.strawberry         | de, en, es, fr, it, ja, nl, pt, zs     | 0.451     | 63 ms          | 1060 MiB   |
| passage-ranker.mango              | de, en, es, fr, it, ja, nl, pt, zs     | 0.480     | 358 ms         | 1070 MiB   |
| passage-ranker.pistachio          | de, en, es, fr, it, ja, nl, pt, zs, pl | 0.380     | 358 ms         | 1070 MiB   |
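
As an illustration of how a Passage Ranker is applied, here is a minimal sketch that scores query-passage pairs cross-encoder style. It assumes the ranker loads as a standard `transformers` sequence-classification model with a single relevance logit; the example texts are made up, and each model card remains the authoritative usage reference.

```python
# Minimal sketch: score query-passage pairs with a cross-encoder ranker.
# Assumes a single-logit sequence-classification head; check the model card.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "sinequa/passage-ranker.chocolate"  # any ranker from the table
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

query = "how to reset my password"
passages = [
    "Open Settings and choose 'Reset password'.",
    "Our offices are closed on public holidays.",
]

# Each query-passage pair is encoded together, cross-encoder style.
batch = tokenizer([query] * len(passages), passages,
                  padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    scores = model(**batch).logits.squeeze(-1)

# Higher score = more relevant; use this to reorder keyword and vector hits.
for passage, score in sorted(zip(passages, scores.tolist()),
                             key=lambda pair: pair[1], reverse=True):
    print(f"{score:.3f}  {passage}")
```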

## Answer Finder

Answer Finders are extractive question answering models developed by Sinequa. Given a query and a passage, they produce two lists of logit scores corresponding to the start token and end token of an answer.

Here is an overview of the models we deliver publicly here.

| Model                           | Languages      | de   | en   | es   | fr   | ja   |
|---------------------------------|----------------|------|------|------|------|------|
| answer-finder-v1-S-en           | en             | 70.6 | 79.5 | 54.1 | 0.5  | X    |
| answer-finder-v1-L-multilingual | de, en, es, fr | 90.8 | 75.0 | 67.1 | 73.4 | X    |
| answer-finder.yuzu              | ja             | X    | X    | X    | X    | 91.5 |

| Model                           | Inference Time | GPU Memory |
|---------------------------------|----------------|------------|
| answer-finder-v1-S-en           | 128 ms         | 560 MiB    |
| answer-finder-v1-L-multilingual | 362 ms         | 1060 MiB   |
| answer-finder.yuzu              | 361 ms         | 1320 MiB   |
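
As an illustration of the start/end logit mechanism described above, here is a minimal sketch that extracts an answer span with the standard `transformers` question-answering API. The span selection below is deliberately naive (independent argmax of the start and end logits); the example question and passage are made up, and each model card remains the authoritative usage reference.

```python
# Minimal sketch: extract an answer span with an Answer Finder.
# Assumes the model loads with the standard extractive QA API.
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

model_name = "sinequa/answer-finder-v1-L-multilingual"  # any model from the tables
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForQuestionAnswering.from_pretrained(model_name)

question = "Where is the company headquartered?"
passage = "The company was founded in 2002 and is headquartered in Paris."

batch = tokenizer(question, passage, return_tensors="pt")
with torch.no_grad():
    output = model(**batch)

# Naive decoding: take the most likely start and end tokens independently.
# A production implementation would enforce start <= end and a max span length.
start = output.start_logits.argmax()
end = output.end_logits.argmax()
answer = tokenizer.decode(batch["input_ids"][0][start:end + 1])
print(answer)
```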