---
language:
- tr
---
This is the embedded version of [barandinho/wikipedia_tr](https://huggingface.co/datasets/barandinho/wikipedia_tr): the dataset was chunked (chunk_size=2048, chunk_overlap=256) and each chunk was passed through an embedding model.\
The embedding model used for this dataset is [sentence-transformers/distiluse-base-multilingual-cased-v1](https://huggingface.co/sentence-transformers/distiluse-base-multilingual-cased-v1), so you must use the same model when running similarity search.\
According to our tests, it is one of the best embedding models for Turkish.\
The embedding dimension is 64 and the values are int8.
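For reference, here is a minimal sketch of how chunks with the stated parameters (chunk_size=2048, chunk_overlap=256) could be produced. The actual splitter used to build this dataset is not specified, so this character-based `chunk_text` helper is an illustrative assumption:

```python
def chunk_text(text: str, chunk_size: int = 2048, chunk_overlap: int = 256) -> list[str]:
    """Split `text` into overlapping character chunks.

    Illustrative helper (not the dataset's actual splitter): consecutive
    chunks share `chunk_overlap` characters so sentences cut at a chunk
    boundary still appear whole in one of the chunks.
    """
    step = chunk_size - chunk_overlap
    # Stop once the remaining tail is fully covered by the previous chunk's overlap
    return [text[i:i + chunk_size] for i in range(0, max(len(text) - chunk_overlap, 1), step)]
```

Each chunk produced this way would then be encoded with the embedding model above before being stored in the `embed_int8` column.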

You can do similarity search with [usearch](https://github.com/unum-cloud/usearch); below is an example that runs a similarity search for a given query.

```python
#!pip install sentence-transformers datasets usearch
import numpy as np
from datasets import load_dataset
from usearch.index import Index
from sentence_transformers import SentenceTransformer

# Load the dataset and the matching embedding model
ds = load_dataset('barandinho/wikipedia_tr_embedded', split="train")
embd = SentenceTransformer("sentence-transformers/distiluse-base-multilingual-cased-v1")

# Collect the stored int8 embeddings into a contiguous array for the usearch index
embeddings = np.asarray(ds['embed_int8'], dtype=np.int8)

num_dim = 64
index = Index(ndim=num_dim, metric='cos')
index.add(np.arange(len(embeddings)), embeddings)

q = 'Fatih Sultan Mehmet'  # query phrasing strongly affects result quality
# 'binary' precision packs the model's float output into int8-typed values
# that match the 64-dimensional stored vectors
q_embd = embd.encode(q, precision='binary')
q_embd = np.asarray(q_embd, dtype=np.int8)

# Get the top 3 results
matches = index.search(q_embd, 3)

for match in matches:
    idx = int(match.key)
    print(ds[idx]['title'])
    print(ds[idx]['text'])
    print("--" * 10)
```

---
dataset_info:
  features:
  - name: url
    dtype: string
  - name: title
    dtype: string
  - name: text
    dtype: string
  - name: embed_int8
    sequence: int64
  splits:
  - name: train
    num_bytes: 1252396519
    num_examples: 762059
  download_size: 501996398
  dataset_size: 1252396519
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---