sebawita committed (verified) · Commit 2d5f709 · 1 Parent(s): 218318b

Update readme - add AWS and Cohere datasets

Files changed (1):
  1. README.md +24 -2

README.md CHANGED
@@ -4,7 +4,11 @@ configs:
 - config_name: no-vectors
   data_files: no-vectors/*.parquet
   default: true
-- config_name: openai-text-embedding-3-small
+- config_name: aws-titan-embed-text-v2
+  data_files: aws/titan-embed-text-v2/*.parquet
+- config_name: cohere-embed-multilingual-v3
+  data_files: cohere/embed-multilingual-v3/*.parquet
+- config_name: openai-text-embedding-3-small
   data_files: openai/text-embedding-3-small/*.parquet
 - config_name: openai-text-embedding-3-large
   data_files: openai/text-embedding-3-large/*.parquet
@@ -57,6 +61,24 @@ from datasets import load_dataset
 dataset = load_dataset("weaviate/wiki-sample", split="train", streaming=True)
 ```
 
+### AWS
+
+**aws-titan-embed-text-v2** - 1024d vectors - generated with AWS Bedrock
+
+```python
+from datasets import load_dataset
+dataset = load_dataset("weaviate/wiki-sample", "aws-titan-embed-text-v2", split="train", streaming=True)
+```
+
+### Cohere
+
+**cohere-embed-multilingual-v3** - 1024d vectors - generated with Cohere
+
+```python
+from datasets import load_dataset
+dataset = load_dataset("weaviate/wiki-sample", "cohere-embed-multilingual-v3", split="train", streaming=True)
+```
+
 ### OpenAI
 
 **text-embedding-3-small** - 1536d vectors - generated with OpenAI
@@ -75,7 +97,7 @@ dataset = load_dataset("weaviate/wiki-sample", "openai-text-embedding-3-large",
 
 ### Snowflake
 
-**snowflake-arctic-embed** - 1024 vectors - generated with Ollama
+**snowflake-arctic-embed** - 1024d vectors - generated with Ollama
 
 ```python
 from datasets import load_dataset
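Because the snippets above pass `streaming=True`, `load_dataset` returns an iterable of records rather than an indexable table, so `len()` and random access are unavailable; the usual pattern for sampling a few rows is `itertools.islice`. A minimal sketch of that pattern, with a hypothetical stand-in generator in place of the real `load_dataset` call (which requires network access):

```python
from itertools import islice

# Hypothetical stand-in for the real call, which needs network access:
#   load_dataset("weaviate/wiki-sample", split="train", streaming=True)
# A streaming dataset behaves as an iterable of dict records, so a plain
# generator is enough to demonstrate the access pattern.
def fake_streaming_dataset():
    for i in range(1000):
        yield {"title": f"Article {i}", "text": f"Body of article {i}"}

dataset = fake_streaming_dataset()

# Take the first three records without materializing the whole dataset.
first_three = list(islice(dataset, 3))
for record in first_three:
    print(record["title"])
```

With the real dataset, replace `fake_streaming_dataset()` with any of the `load_dataset(...)` calls above; the `islice` pattern is identical.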