---
license: mit
pipeline_tag: text-generation
base_model:
- meta-llama/Llama-3.2-3B-Instruct
---
# 🚀 IoraX 3B: Efficient Conversational AI Model
![IoraX Logo](./IoraX.png)


## ✨ Model Overview

**IoraX 3B** is an efficient 3-billion-parameter Transformer, fine-tuned with LoRA adapters on Meta LLaMA 3.2 (3B) and quantized to 4 bits to keep it fast and lightweight.

The model specializes in conversational understanding, logical reasoning, and coherent long-form generation, making it well suited to research, education, and creative tasks.

---

## 🎯 Features & Capabilities

- 🧠 **Size:** 3B parameters
- ⚙️ **Base:** Meta LLaMA 3.2 (3B)
- 🔧 **Fine-tuning:** LoRA with 4-bit quantization
- ⏳ **Max context length:** 2048 tokens (with RoPE scaling)
- 📚 **Training data:** Blend of public conversational datasets + expert-curated Q&A
- 🔄 **Epochs:** 3, for balanced speed and learning
- 🌍 **Language:** English
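To give a sense of why LoRA keeps fine-tuning a 3B model lightweight, here is a minimal back-of-the-envelope sketch. The dimensions and rank below are illustrative assumptions, not the actual IoraX training configuration:

```python
def lora_trainable_params(d_out: int, d_in: int, rank: int) -> int:
    """LoRA freezes the full weight W (d_out x d_in) and trains only two
    low-rank matrices: A (rank x d_in) and B (d_out x rank)."""
    return rank * (d_in + d_out)

# Hypothetical numbers: one square attention projection in a ~3B model,
# with an assumed LoRA rank of 16.
full = 3072 * 3072
lora = lora_trainable_params(3072, 3072, rank=16)
print(f"full: {full:,}  lora: {lora:,}  ratio: {lora / full:.4%}")
```

At rank 16 the adapter trains roughly 1% of that projection's parameters, which is what makes LoRA fine-tuning practical on modest hardware.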

---

## 🚀 Use Cases

| Use Case              | Description                            |
|-----------------------|----------------------------------------|
| 💬 Conversational AI  | Customer support, chatbots, assistants |
| 🎓 Education          | Tutoring, concept explanation, Q&A     |
| 🧪 Research Assistant | Drafting, summarizing, brainstorming   |
| ✍️ Creative Writing   | Storytelling, script generation        |

---

## ⚠️ Limitations

- 📅 **Knowledge cutoff:** Data up to 2023 only
- ⚖️ **Bias:** May reflect biases present in the training corpus
- ✔️ **Accuracy:** Verify important outputs, especially in critical domains
- 🧑‍⚖️ **Not a replacement for experts:** Use responsibly

---

## 💡 Quick Start

```python
from unsloth import FastLanguageModel

model_name = "XythicK/IoraX-3B"

# unsloth's from_pretrained returns both the model and its tokenizer
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name,
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable optimized inference mode

messages = [
    {"role": "user", "content": "Explain the philosophical significance of the Eiffel Tower. 🌉🤔"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt",
).to("cuda")

outputs = model.generate(
    input_ids=inputs,
    max_new_tokens=128,
    do_sample=True,   # required for temperature to take effect
    temperature=1.2,
    use_cache=True,
)

print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```
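For intuition, `apply_chat_template` renders the messages into the base model's special-token prompt format before tokenizing. The sketch below approximates the standard Llama 3 layout; it is an assumption based on the base model's documented format, so always trust the tokenizer's own `chat_template` rather than this string:

```python
def llama3_prompt(messages):
    """Approximate the Llama 3-style chat prompt for a list of
    {"role": ..., "content": ...} messages (illustrative only)."""
    parts = ["<|begin_of_text|>"]
    for m in messages:
        parts.append(
            f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n{m['content']}<|eot_id|>"
        )
    # Trailing assistant header corresponds to add_generation_prompt=True
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

prompt = llama3_prompt([{"role": "user", "content": "Hello!"}])
print(prompt)
```

Seeing the rendered prompt this way can help when debugging unexpected generations, e.g. a missing `add_generation_prompt=True` leaves off the trailing assistant header and the model may continue the user turn instead of answering it.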

## 🙋 Contact

**Maintainer:** M Mashhudur Rahim (XythicK)

**Role:** Independent Machine Learning Researcher & Model Infrastructure Maintainer, focused on model quantization, optimization, and efficient deployment.


For issues, improvement requests, or additional quantization formats, please use the Hugging Face Discussions or Issues tab.

## 📄 Citation

If you use IoraX in your work, please cite:
```bibtex
@misc{ioraX2025,
  title = {IoraX 3B: Efficient Conversational AI},
  author = {M Mashhudur Rahim (XythicK)},
  year = {2025},
  publisher = {Hugging Face},
  howpublished = {\url{https://huggingface.co/XythicK/IoraX-3B}}
}
```

## ❤️ Acknowledgements

Thanks to Hugging Face and the open-source machine learning community for providing the tools and platforms that make efficient model sharing and deployment possible.