---
language:
- tr
---
This is the embedded version of [barandinho/wikipedia_tr](https://huggingface.co/datasets/barandinho/wikipedia_tr): the dataset was split into chunks (chunk_size=2048, chunk_overlap=256) and each chunk was run through an embedding model.\
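The chunking step can be sketched as a simple sliding window over the text. This is a minimal illustration under the stated parameters, not the exact pipeline used to build the dataset:

```python
def chunk_text(text: str, chunk_size: int = 2048, chunk_overlap: int = 256) -> list[str]:
    """Split text into overlapping character windows.

    Each window is chunk_size characters long and starts
    chunk_size - chunk_overlap characters after the previous one,
    so consecutive chunks share chunk_overlap characters.
    """
    step = chunk_size - chunk_overlap
    return [text[i:i + chunk_size] for i in range(0, max(len(text) - chunk_overlap, 1), step)]

# A 5000-character text yields 3 overlapping chunks
chunks = chunk_text("".join(str(i % 10) for i in range(5000)))
```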
The embedding model used for this dataset is [sentence-transformers/distiluse-base-multilingual-cased-v1](https://huggingface.co/sentence-transformers/distiluse-base-multilingual-cased-v1), so you have to use the same model for similarity search.\
According to our tests, it is one of the best embedding models for Turkish.\
The embedding dimension is 64 and the values are int8.
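Assuming the 64 int8 values come from binary quantization of the model's 512-dimensional output (each float thresholded at 0 and the bits packed 8 per byte, which is what `precision='binary'` in sentence-transformers does), the packing can be sketched in pure NumPy:

```python
import numpy as np

def binary_quantize(emb: np.ndarray) -> np.ndarray:
    """Threshold a float embedding at 0 and pack the bits into int8 bytes."""
    bits = (emb > 0).astype(np.uint8)       # 512 binary values
    return np.packbits(bits).view(np.int8)  # 512 bits / 8 = 64 int8 values

packed = binary_quantize(np.random.randn(512))
# packed.shape == (64,), packed.dtype == int8
```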
You can run similarity search with [usearch](https://github.com/unum-cloud/usearch); below is an example for a given query.
```python
#!pip install sentence-transformers datasets usearch
import numpy as np
from datasets import load_dataset
from usearch.index import Index
from sentence_transformers import SentenceTransformer
# Load dataset and corresponding embedding model
ds = load_dataset('barandinho/wikipedia_tr_embedded', split="train")
embd = SentenceTransformer("sentence-transformers/distiluse-base-multilingual-cased-v1", trust_remote_code=True)
# Get embeddings as a contiguous int8 array to build the usearch Index
dtype = np.int8
embeddings = np.asarray(ds['embed_int8'], dtype=dtype)
num_dim = 64
index = Index(ndim=num_dim, metric='cos')
index.add(np.arange(len(embeddings)), embeddings)
q = 'Fatih Sultan Mehmet'  # phrasing the query well matters a lot for result quality
# precision='binary' packs the embedding into 64 int8 values,
# matching the format of the stored embeddings
q_embd = embd.encode(q, precision='binary')
q_embd = np.asarray(q_embd, dtype=np.int8)
# Get top 3 results
matches = index.search(q_embd, 3)
for match in matches:
    idx = int(match.key)
    print(ds[idx]['title'])
    print(ds[idx]['text'])
    print("--" * 10)
```
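If you would rather avoid an extra index dependency for small subsets, a brute-force cosine search over the int8 embeddings works too. This is a hypothetical alternative, not part of the dataset's tooling; `top_k_cosine` is an illustrative helper name:

```python
import numpy as np

def top_k_cosine(query: np.ndarray, corpus: np.ndarray, k: int = 3) -> np.ndarray:
    """Return indices of the k corpus rows most cosine-similar to the query."""
    q = query.astype(np.float32)
    c = corpus.astype(np.float32)
    sims = (c @ q) / (np.linalg.norm(c, axis=1) * np.linalg.norm(q) + 1e-9)
    return np.argsort(-sims)[:k]

# Toy corpus standing in for the dataset's embed_int8 column
rng = np.random.default_rng(0)
corpus = rng.integers(-128, 128, size=(100, 64)).astype(np.int8)
idx = top_k_cosine(corpus[0], corpus)
```

For the full 762k-row dataset an ANN index like usearch is the better choice; brute force scans every row on each query.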
---
dataset_info:
features:
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
- name: embed_int8
sequence: int64
splits:
- name: train
num_bytes: 1252396519
num_examples: 762059
download_size: 501996398
dataset_size: 1252396519
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---