pinned: false
---

MTEB is a Python framework for evaluating embedding and retrieval systems for both text and images.
It covers more than 1,000 languages and a diverse set of tasks, from classics such as classification and clustering to use-case-specific tasks such as legal, code, or healthcare retrieval.

To get started with [`mteb`](https://github.com/embeddings-benchmark/mteb), check out our [documentation](https://embeddings-benchmark.github.io/mteb/usage/get_started/).


| Overview                 |                                                                                       |
|--------------------------|---------------------------------------------------------------------------------------|
| 📈 [Leaderboard]         | The interactive leaderboard of the benchmark                                          |
| **Get Started**          |                                                                                       |
| 🏃 [Get Started]         | Overview of how to use mteb                                                           |
| 🤖 [Defining Models]     | How to use existing models and define custom ones                                     |
| 📋 [Selecting tasks]     | How to select tasks, benchmarks, splits, etc.                                         |
| 🏭 [Running Evaluation]  | How to run the evaluations, including cache management, speeding up evaluations, etc. |
| 📊 [Loading Results]     | How to load and work with existing model results                                      |
| **Overview**             |                                                                                       |
| 📋 [Tasks]               | Overview of available tasks                                                           |
| 📐 [Benchmarks]          | Overview of available benchmarks                                                      |
| 🤖 [Models]              | Overview of available models                                                          |
| **Contributing**         |                                                                                       |
| 🤖 [Adding a model]      | How to submit a model to MTEB and the leaderboard                                     |
| 👩‍💻 [Adding a dataset]    | How to add a new task/dataset to MTEB                                                 |
| 👩‍💻 [Adding a benchmark]  | How to add a new benchmark to MTEB and the leaderboard                                |
| 🤝 [Contributing]        | How to contribute to MTEB and set it up for development                               |

[Get Started]: https://embeddings-benchmark.github.io/mteb/usage/get_started/
[Defining Models]: https://embeddings-benchmark.github.io/mteb/usage/defining_the_model/
[Selecting tasks]: https://embeddings-benchmark.github.io/mteb/usage/selecting_tasks/
[Running Evaluation]: https://embeddings-benchmark.github.io/mteb/usage/running_the_evaluation/
[Loading Results]: https://embeddings-benchmark.github.io/mteb/usage/loading_results/
[Tasks]: https://embeddings-benchmark.github.io/mteb/overview/available_tasks/any2anymultilingualretrieval/
[Benchmarks]: https://embeddings-benchmark.github.io/mteb/overview/available_benchmarks/
[Models]: https://embeddings-benchmark.github.io/mteb/overview/available_models/text/
[Contributing]: CONTRIBUTING.md
[Adding a model]: docs/adding_a_model.md
[Adding a dataset]: docs/adding_a_dataset.md
[Adding a benchmark]: docs/adding_a_benchmark.md
[Leaderboard]: https://huggingface.co/spaces/mteb/leaderboard