---

language: 
- en
- te
- sa
library_name: transformers
pipeline_tag: text-generation
tags:
- multilingual
- fine-tuned
- deepseek
- entity-extraction
license: apache-2.0
---


# asrith05/slm

A multilingual model fine-tuned for entity extraction tasks in English, Telugu, and Sanskrit.

## Model Details
- **Base**: DeepSeek architecture
- **Languages**: English, Telugu, Sanskrit
- **Task**: Entity extraction
- **Size**: ~416 MB

## Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "asrith05/slm"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Example: entity extraction prompt
prompt = "Extract entities: John works at Microsoft in Seattle."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
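The decoded output typically echoes the prompt followed by the generated completion. A minimal post-processing sketch is shown below; it assumes the model emits entities as a comma- or newline-separated list, which is an assumption about the fine-tuning format and may need adjusting to match actual model responses.

```python
import re

def parse_entities(generated: str, prompt: str) -> list[str]:
    # Strip the echoed prompt, if present, then split the remainder
    # on commas/newlines. The comma-separated output format is an
    # assumption; adapt this to the model's actual response style.
    completion = generated[len(prompt):] if generated.startswith(prompt) else generated
    return [e.strip() for e in re.split(r"[,\n]", completion) if e.strip()]
```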

## Training
- Dataset: 30K examples (20K train / 5K validation / 5K test)
- Epochs: 1
- Learning rate: 5e-5