Update README.md
README.md CHANGED
@@ -7,4 +7,41 @@ sdk: static
 pinned: false
 ---
 
-
+MTEB is a Python framework for evaluating embeddings and retrieval systems for both text and images.
+MTEB covers more than 1000 languages and diverse tasks, from classics like classification and clustering to use-case specialized tasks such as legal, code, or healthcare retrieval.
+
+To get started with [`mteb`](https://github.com/embeddings-benchmark/mteb), check out our [documentation](https://embeddings-benchmark.github.io/mteb/usage/get_started/).
+
+
+| Overview                  |                                                                                    |
+|---------------------------|------------------------------------------------------------------------------------|
+| 🏆 [Leaderboard]          | The interactive leaderboard of the benchmark                                       |
+| **Get Started**           |                                                                                    |
+| 🚀 [Get Started]          | Overview of how to use mteb                                                        |
+| 🤖 [Defining Models]      | How to use existing models and define custom ones                                  |
+| 📋 [Selecting tasks]      | How to select tasks, benchmarks, splits, etc.                                      |
+| 🏃 [Running Evaluation]   | How to run evaluations, including cache management, speeding up evaluations, etc.  |
+| 📊 [Loading Results]      | How to load and work with existing model results                                   |
+| **Overview**              |                                                                                    |
+| 📋 [Tasks]                | Overview of available tasks                                                        |
+| 📐 [Benchmarks]           | Overview of available benchmarks                                                   |
+| 🤖 [Models]               | Overview of available models                                                       |
+| **Contributing**          |                                                                                    |
+| 🤖 [Adding a model]       | How to submit a model to MTEB and to the leaderboard                               |
+| 👩‍💻 [Adding a dataset]     | How to add a new task/dataset to MTEB                                              |
+| 👩‍💻 [Adding a benchmark]   | How to add a new benchmark to MTEB and to the leaderboard                          |
+| 🤝 [Contributing]         | How to contribute to MTEB and set it up for development                            |
+
+[Get Started]: https://embeddings-benchmark.github.io/mteb/usage/get_started/
+[Defining Models]: https://embeddings-benchmark.github.io/mteb/usage/defining_the_model/
+[Selecting tasks]: https://embeddings-benchmark.github.io/mteb/usage/selecting_tasks/
+[Running Evaluation]: https://embeddings-benchmark.github.io/mteb/usage/running_the_evaluation/
+[Loading Results]: https://embeddings-benchmark.github.io/mteb/usage/loading_results/
+[Tasks]: https://embeddings-benchmark.github.io/mteb/overview/available_tasks/any2anymultilingualretrieval/
+[Benchmarks]: https://embeddings-benchmark.github.io/mteb/overview/available_benchmarks/
+[Models]: https://embeddings-benchmark.github.io/mteb/overview/available_models/text/
+[Contributing]: CONTRIBUTING.md
+[Adding a model]: docs/adding_a_model.md
+[Adding a dataset]: docs/adding_a_dataset.md
+[Adding a benchmark]: docs/adding_a_benchmark.md
+[Leaderboard]: https://huggingface.co/spaces/mteb/leaderboard
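
As a companion to the Get Started link above, a minimal sketch of running an evaluation with `mteb` might look like the snippet below. The model and task names are only illustrative, and the exact entry points can differ between `mteb` releases, so treat the [Get Started] guide as the authoritative reference.

```python
import mteb

# Illustrative choices only: any supported model name and task set works here.
model = mteb.get_model("sentence-transformers/all-MiniLM-L6-v2")
tasks = mteb.get_tasks(tasks=["Banking77Classification"])

# Run the evaluation and write per-task result files under ./results.
evaluation = mteb.MTEB(tasks=tasks)
results = evaluation.run(model, output_folder="results")
print(results)
```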