---
base_model: unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit
library_name: peft
license: apache-2.0
datasets:
- ragul2607/history-llm
language:
- en
tags:
- HISTORY
- INSTRUCTION
- TAMIL NADU SSLC
- LLM
- FINE-TUNING
---

# Model Card for SicMundus
## Model Details

### Model Description

**SicMundus** is a fine-tuned version of `unsloth/Llama-3.2-1B-Instruct`, optimized for historical instruction-following tasks, particularly those aligned with Tamil Nadu State Board-style history education. Using PEFT with LoRA, it has been trained on the `ragul2607/history-llm` dataset. The goal is to deliver domain-specific, accurate, and relevant historical responses.
- **Developed by:** Ragul
- **Funded by:** Self-funded
- **Organization:** Pinnacle Organization
- **Shared by:** Ragul
- **Model type:** Instruction-tuned Language Model (History)
- **Language(s):** English
- **License:** Apache 2.0
- **Fine-tuned from:** `unsloth/Llama-3.2-1B-Instruct`
### Model Sources

- **Model Repository:** [ragul2607/SicMundus](https://huggingface.co/ragul2607/SicMundus)
- **Dataset:** [ragul2607/history-llm](https://huggingface.co/datasets/ragul2607/history-llm)
## Uses
### Direct Use

- Answering history questions (school and competitive-exam level)
- Explaining historical events, their causes, and impacts
- Preparing students for TN SSLC exams
- Educational support for teachers and learners
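Question answering with this model depends on the exact instruction template shown in the Getting Started section below. A small helper can keep that template consistent; `build_history_prompt` is a hypothetical name used here for illustration, a minimal sketch of the format this card uses.

```python
def build_history_prompt(question: str) -> str:
    """Wrap a history question in the instruction template shown in this model card."""
    return (
        "Below is an input followed by its expected output. "
        "Complete the task appropriately.\n\n"
        "### Input:\n"
        f"{question}\n\n"
        "### Output:\n"
    )

# Example: produce the prompt for a typical TN SSLC-style question.
prompt = build_history_prompt("Explain the causes of the French Revolution.")
```

The resulting string can be passed directly to the tokenizer in the Getting Started example.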
### Downstream Use

- Fine-tuning for other regional curricula (e.g., CBSE, ICSE)
- History-focused edtech solutions
- AI-based tutoring and exam-practice tools
### Out-of-Scope Use

- General programming, math, or science tasks
- Legal, financial, or medical advice
- Real-time decision-critical systems
## Bias, Risks, and Limitations

Since the model is trained on curated historical Q&A, it may exhibit dataset-induced biases or regional perspectives. It is not intended to be used as a definitive authority on history, especially for critical or controversial events.
**Recommendation:** Always cross-check responses with textbooks or official curriculum content.
## Getting Started
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_path = "ragul2607/SicMundus"
tokenizer = AutoTokenizer.from_pretrained(model_path)
# Note: if this repo holds only the LoRA adapter (library_name: peft), loading may
# instead require peft (e.g. AutoPeftModelForCausalLM), which resolves the base model.
model = AutoModelForCausalLM.from_pretrained(
    model_path, torch_dtype=torch.float16, device_map="auto"
)

prompt = """Below is an input followed by its expected output. Complete the task appropriately.

### Input:
Explain the causes of the French Revolution.

### Output:
"""

# Move inputs to the device the model was placed on, rather than hard-coding "cuda".
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```