prithivMLmods committed af03c6f · verified · parent: 1ecb781

Update README.md

Files changed (1): README.md +78 -1
- Java
- Qwen
- Math
---

# **Raptor-X2**

> Raptor-X2 is built on the Qwen 2.5 14B architecture and is designed to enhance the reasoning capabilities of 14B-parameter models. It is optimized for advanced mathematical explanation, scientific reasoning, and general-purpose coding, and excels at contextual understanding, logical deduction, and multi-step problem solving. Raptor-X2 has been fine-tuned with long chain-of-thought reasoning and specialized datasets to improve comprehension, structured responses, and conversational intelligence.

Key improvements include:
1. **Enhanced Mathematical Reasoning**: Provides step-by-step explanations for complex mathematical problems, making it useful for students, researchers, and professionals.
2. **Advanced Scientific Understanding**: Excels in explaining scientific concepts across physics, chemistry, biology, and engineering.
3. **General-Purpose Coding**: Capable of generating, debugging, and optimizing code across multiple programming languages, supporting software development and automation.
4. **Long-Context Support**: Supports up to 128K tokens of input context and can generate up to 8K tokens in a single output, making it well suited to detailed responses.
5. **Multilingual Proficiency**: Supports over 29 languages, including English, Chinese, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.
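
The long-context figures in point 4 can be made concrete with a small budgeting sketch. The helper below is hypothetical (not a model or library API), and it assumes the generated output shares the 128K window with the prompt:

```python
# Token budget for Raptor-X2's long-context support (figures from the list above).
# `fits_context` is an illustrative helper, not part of the model's API.
MAX_CONTEXT_TOKENS = 128_000  # maximum input context
MAX_OUTPUT_TOKENS = 8_000     # maximum tokens per generation

def fits_context(prompt_tokens: int, max_new_tokens: int = MAX_OUTPUT_TOKENS) -> bool:
    """Check that the prompt plus the planned output stays within the window."""
    return prompt_tokens + max_new_tokens <= MAX_CONTEXT_TOKENS

print(fits_context(100_000))  # a 100K-token prompt still leaves room for an 8K reply
print(fits_context(125_000))  # 125K + 8K exceeds the 128K window
```

A check like this is useful before calling `generate`, since prompts that overflow the window are silently truncated or rejected depending on the serving stack.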

# **Quickstart with transformers**

Here is a code snippet with `apply_chat_template` to show you how to load the tokenizer and model and generate content:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Raptor-X2"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Explain the fundamental theorem of calculus."
messages = [
    {"role": "system", "content": "You are a helpful assistant capable of answering a wide range of questions."},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
# Strip the prompt tokens so only the newly generated text is decoded.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
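
For reference, `apply_chat_template` renders the message list into the ChatML layout used by Qwen-family models. The hand-rolled equivalent below is a sketch for illustration only; the special tokens are assumed from Qwen 2.5 conventions, and the tokenizer's own template remains the authoritative formatter:

```python
def to_chatml(messages, add_generation_prompt=True):
    """Render chat messages in the ChatML layout used by Qwen-family models."""
    text = ""
    for message in messages:
        text += f"<|im_start|>{message['role']}\n{message['content']}<|im_end|>\n"
    if add_generation_prompt:
        # Open the assistant turn so generation continues from here.
        text += "<|im_start|>assistant\n"
    return text

messages = [
    {"role": "system", "content": "You are a helpful assistant capable of answering a wide range of questions."},
    {"role": "user", "content": "Explain the fundamental theorem of calculus."},
]
print(to_chatml(messages))
```

Seeing the rendered string makes it clear why `add_generation_prompt=True` matters: it opens the assistant turn so the model completes a reply rather than continuing the user's message.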

# **Intended Use**

1. **Mathematical Explanation**: Provides step-by-step solutions to mathematical problems, including algebra, calculus, and discrete mathematics.
2. **Scientific Reasoning**: Suitable for explaining scientific theories, conducting physics simulations, and solving chemistry equations.
3. **Programming and Software Development**: Capable of generating, analyzing, and optimizing code in multiple programming languages.
4. **Educational Assistance**: Helps students and researchers with explanations, summaries, and structured learning material.
5. **Multilingual Applications**: Supports global communication, translation, and multilingual content generation.
6. **Long-Form Content Generation**: Can generate extended responses, including research papers, documentation, and technical reports.

# **Limitations**

1. **Hardware Requirements**: Requires high-memory GPUs or TPUs due to its large parameter count and long-context support.
2. **Potential Bias in Responses**: Although designed to be neutral, outputs may still reflect biases present in the training data.
3. **Complexity in Specialized Scientific Domains**: While proficient in general science, outputs in highly specialized fields may require expert verification.
4. **Limited Real-World Awareness**: Has no access to events after its training cutoff.
5. **Error Propagation in Extended Outputs**: Minor errors early in a response may affect the overall coherence of long-form outputs.
6. **Prompt Sensitivity**: Response quality may depend on how well the input prompt is structured.