Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)

Python_Ass - GGUF
- Model creator: https://huggingface.co/chrisnic/
- Original model: https://huggingface.co/chrisnic/Python_Ass/


| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Python_Ass.Q2_K.gguf](https://huggingface.co/RichardErkhov/chrisnic_-_Python_Ass-gguf/blob/main/Python_Ass.Q2_K.gguf) | Q2_K | 2.96GB |
| [Python_Ass.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/chrisnic_-_Python_Ass-gguf/blob/main/Python_Ass.IQ3_XS.gguf) | IQ3_XS | 3.28GB |
| [Python_Ass.IQ3_S.gguf](https://huggingface.co/RichardErkhov/chrisnic_-_Python_Ass-gguf/blob/main/Python_Ass.IQ3_S.gguf) | IQ3_S | 3.43GB |
| [Python_Ass.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/chrisnic_-_Python_Ass-gguf/blob/main/Python_Ass.Q3_K_S.gguf) | Q3_K_S | 3.41GB |
| [Python_Ass.IQ3_M.gguf](https://huggingface.co/RichardErkhov/chrisnic_-_Python_Ass-gguf/blob/main/Python_Ass.IQ3_M.gguf) | IQ3_M | 3.52GB |
| [Python_Ass.Q3_K.gguf](https://huggingface.co/RichardErkhov/chrisnic_-_Python_Ass-gguf/blob/main/Python_Ass.Q3_K.gguf) | Q3_K | 3.74GB |
| [Python_Ass.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/chrisnic_-_Python_Ass-gguf/blob/main/Python_Ass.Q3_K_M.gguf) | Q3_K_M | 3.74GB |
| [Python_Ass.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/chrisnic_-_Python_Ass-gguf/blob/main/Python_Ass.Q3_K_L.gguf) | Q3_K_L | 4.03GB |
| [Python_Ass.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/chrisnic_-_Python_Ass-gguf/blob/main/Python_Ass.IQ4_XS.gguf) | IQ4_XS | 4.18GB |
| [Python_Ass.Q4_0.gguf](https://huggingface.co/RichardErkhov/chrisnic_-_Python_Ass-gguf/blob/main/Python_Ass.Q4_0.gguf) | Q4_0 | 4.34GB |
| [Python_Ass.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/chrisnic_-_Python_Ass-gguf/blob/main/Python_Ass.IQ4_NL.gguf) | IQ4_NL | 4.38GB |
| [Python_Ass.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/chrisnic_-_Python_Ass-gguf/blob/main/Python_Ass.Q4_K_S.gguf) | Q4_K_S | 4.37GB |
| [Python_Ass.Q4_K.gguf](https://huggingface.co/RichardErkhov/chrisnic_-_Python_Ass-gguf/blob/main/Python_Ass.Q4_K.gguf) | Q4_K | 4.58GB |
| [Python_Ass.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/chrisnic_-_Python_Ass-gguf/blob/main/Python_Ass.Q4_K_M.gguf) | Q4_K_M | 4.58GB |
| [Python_Ass.Q4_1.gguf](https://huggingface.co/RichardErkhov/chrisnic_-_Python_Ass-gguf/blob/main/Python_Ass.Q4_1.gguf) | Q4_1 | 4.78GB |
| [Python_Ass.Q5_0.gguf](https://huggingface.co/RichardErkhov/chrisnic_-_Python_Ass-gguf/blob/main/Python_Ass.Q5_0.gguf) | Q5_0 | 5.21GB |
| [Python_Ass.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/chrisnic_-_Python_Ass-gguf/blob/main/Python_Ass.Q5_K_S.gguf) | Q5_K_S | 5.21GB |
| [Python_Ass.Q5_K.gguf](https://huggingface.co/RichardErkhov/chrisnic_-_Python_Ass-gguf/blob/main/Python_Ass.Q5_K.gguf) | Q5_K | 5.34GB |
| [Python_Ass.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/chrisnic_-_Python_Ass-gguf/blob/main/Python_Ass.Q5_K_M.gguf) | Q5_K_M | 5.34GB |
| [Python_Ass.Q5_1.gguf](https://huggingface.co/RichardErkhov/chrisnic_-_Python_Ass-gguf/blob/main/Python_Ass.Q5_1.gguf) | Q5_1 | 5.65GB |
| [Python_Ass.Q6_K.gguf](https://huggingface.co/RichardErkhov/chrisnic_-_Python_Ass-gguf/blob/main/Python_Ass.Q6_K.gguf) | Q6_K | 6.14GB |
| [Python_Ass.Q8_0.gguf](https://huggingface.co/RichardErkhov/chrisnic_-_Python_Ass-gguf/blob/main/Python_Ass.Q8_0.gguf) | Q8_0 | 7.95GB |

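For quick local testing, any of the files above can be downloaded from this repository and loaded with llama.cpp bindings. The following is a minimal, illustrative sketch (not an official recipe) that assumes `huggingface_hub` and `llama-cpp-python` are installed and uses the Q4_K_M file as an example:

```python
# Illustrative sketch: download one quant and run it locally with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch the Q4_K_M file (~4.58 GB) from this repository.
model_path = hf_hub_download(
    repo_id="RichardErkhov/chrisnic_-_Python_Ass-gguf",
    filename="Python_Ass.Q4_K_M.gguf",
)

# Load the GGUF model; tune n_ctx and n_gpu_layers for your hardware.
llm = Llama(model_path=model_path, n_ctx=4096, n_gpu_layers=-1)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a Python function that reverses a string."}],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```

As a rule of thumb, the lower-bit quants (Q2_K through Q4_K_M) trade some accuracy for smaller files, while the higher-bit quants (Q5_K_M, Q6_K, Q8_0) stay closer to the original weights at a larger size.
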
Original model description:
---
license: llama3.1
language:
- en
- it
base_model:
- meta-llama/Llama-3.1-8B
pipeline_tag: text-generation
library_name: transformers
tags:
- code
---
# Python Code Assistant based on LLaMA 3.1

This model is a specialized Python coding assistant, fine-tuned from LLaMA 3.1 8B Instruct using a two-stage training approach with carefully curated Python programming datasets.

## Model Description

The model has been trained to assist with Python programming tasks through a progressive fine-tuning approach:

### First Training Stage
- Base Model: LLaMA 3.1 8B Instruct
- Dataset: [iamtarun/python_code_instructions_18k_alpaca](https://huggingface.co/datasets/iamtarun/python_code_instructions_18k_alpaca)
- Training Focus: Understanding Python programming instructions and generating appropriate code responses

### Second Training Stage
- Dataset: [flytech/python-codes-25k](https://huggingface.co/datasets/flytech/python-codes-25k)
- Focus: Enhancing code generation capabilities and understanding of advanced Python concepts

### Training Methodology

The fine-tuning run used the following techniques (a configuration sketch follows the lists below):

- **LoRA Fine-tuning Parameters**:
  - Rank (r): 8
  - Alpha: 16
  - Dropout: 0.1
  - Target Modules: Query and Value Projections

- **Training Optimizations**:
  - 4-bit quantization (NF4 format)
  - Gradient checkpointing
  - Dynamic learning rate adjustment
  - Early stopping with patience=3
  - Adaptive batch processing
  - Memory-efficient training with automated cleanup

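As a rough illustration only (this is not the author's training script), the parameters above correspond to a standard PEFT + bitsandbytes setup; the base checkpoint name and compute dtype below are assumptions:

```python
# Illustrative sketch of the described configuration; not the original training code.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 4-bit NF4 quantization with double quantization, as listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,  # assumed compute dtype
)

base_id = "meta-llama/Llama-3.1-8B-Instruct"  # assumed base checkpoint
model = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Gradient checkpointing trades recompute for lower activation memory.
model.gradient_checkpointing_enable()

# LoRA on the query and value projections with the listed hyperparameters.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.1,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```

The early stopping (patience=3) and dynamic learning-rate adjustment would typically live in the training loop, e.g. `transformers.EarlyStoppingCallback(early_stopping_patience=3)` and a scheduler set via `TrainingArguments`; the exact setup used here is not documented, so treat those as assumptions.
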
### Model Architecture
- Base Architecture: LLaMA 3.1 8B Instruct
- Training Format: 4-bit quantization with double quantization
- Memory Efficient: Optimized for deployment with reduced memory footprint

## Intended Uses

This model is designed for:
- Generating Python code from natural language descriptions
- Assisting with code completion and suggestions
- Explaining Python concepts and best practices
- Helping with code debugging and optimization
- Supporting Python development tasks

## Training Data

The model was trained on a combination of two datasets (a loading sketch follows the list):
1. 18,000 Python programming instructions and implementations from the alpaca-style dataset (iamtarun/python_code_instructions_18k_alpaca)
2. 25,000 Python code examples and explanations (flytech/python-codes-25k)

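Both datasets are available on the Hugging Face Hub. A minimal loading sketch with the `datasets` library (the `train` split name is an assumption):

```python
# Sketch: load the two fine-tuning datasets; split names are assumptions.
from datasets import load_dataset

stage1 = load_dataset("iamtarun/python_code_instructions_18k_alpaca", split="train")
stage2 = load_dataset("flytech/python-codes-25k", split="train")
print(f"stage 1 examples: {len(stage1)}, stage 2 examples: {len(stage2)}")
```
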
## Performance and Limitations

### Strengths
- Specialized in Python programming tasks
- Memory-efficient implementation
- Trained with gradient stability monitoring
- Optimized for practical coding assistance

### Limitations
- Limited to the Python programming language
- Bound by LLaMA 3.1's knowledge cutoff
- May require additional context for complex programming tasks

## Usage Tips

To get the best results from this model (see the example prompt after this list):
1. Provide clear and specific instructions
2. Include relevant context when asking for code
3. Specify any particular Python version or library requirements
4. Mention any performance or style preferences

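Putting those tips together, a single prompt can state the Python version, required libraries, and style constraints up front. A minimal prompting sketch with `transformers`, assuming the original (non-GGUF) weights at `chrisnic/Python_Ass` expose a chat template:

```python
# Illustrative usage sketch; the exact prompt format/chat template is an assumption.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "chrisnic/Python_Ass"  # original full-precision repository
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {
        "role": "user",
        "content": (
            "Using Python 3.11 and pandas, write a PEP 8 compliant function that "
            "reads a CSV file and returns the rows where the 'score' column exceeds 0.8."
        ),
    }
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=300, do_sample=False)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```
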
## Training Hardware Requirements

The model was trained using:
- GPU: NVIDIA RTX 4090 (24 GB VRAM)
- A CUDA-compatible environment
- 4-bit quantization for memory efficiency

## License and Usage Rights
- Base model: the LLaMA 3.1 license applies
- Additional training: [Specify your license]

## Citation and Contact

Contact: christiannicoletti75@gmail.com