---
language:
- en
- te
- sa
library_name: transformers
pipeline_tag: text-generation
tags:
- multilingual
- fine-tuned
- deepseek
- entity-extraction
license: apache-2.0
---
# asrith05/slm

Fine-tuned multilingual model for entity-extraction tasks.

## Model Details

- **Base**: DeepSeek architecture
- **Languages**: English, Telugu, Sanskrit
- **Task**: Entity extraction
- **Size**: ~416 MB
## Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "asrith05/slm"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Example usage
prompt = "Extract entities: John works at Microsoft in Seattle."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Training

- Dataset: 30K examples (20K train / 5K validation / 5K test)
- Epochs: 1
- Learning rate: 5e-5
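The reported hyperparameters above could be expressed as a `transformers` `TrainingArguments` config. This is a hypothetical reconstruction, not the author's actual training script: only the epoch count and learning rate come from this card, and `output_dir` and batch size are placeholder assumptions.

```python
from transformers import TrainingArguments

# Sketch of the reported setup; values not listed in the card are assumptions.
training_args = TrainingArguments(
    output_dir="slm-finetune",       # assumption: output path not reported
    num_train_epochs=1,              # from the card
    learning_rate=5e-5,              # from the card
    per_device_train_batch_size=8,   # assumption: batch size not reported
)
```

This config fragment would then be passed to a `Trainer` together with the model, tokenizer, and the 20K-example training split.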