shafire committed
Commit aed27fc · verified · 1 Parent(s): 6b9bb33

Update README.md

Files changed (1): README.md (+68 −49)
README.md CHANGED
@@ -1,28 +1,39 @@
- **QuantumAI: Zero LLM Quantum AI Model**
- This is QuantumAI, a cutting-edge text generation model based on Meta-Llama-3.1-8B-Instruct, fine-tuned for conversational tasks using AutoTrain. The model is designed to handle a variety of natural language processing tasks, with a special focus on interactive dialogue, text generation, and inference.
 
  ![Zero LLM Quantum AI](https://huggingface.co/shafire/QuantumAI/resolve/main/ZeroQuantumAI.png)
 
- *Model Information**
- Base Model: meta-llama/Meta-Llama-3.1-8B
- Fine-tuned Model: meta-llama/Meta-Llama-3.1-8B-Instruct
- Training Framework: AutoTrain
- Training Data: Conversational and text-generation focused dataset
- Tech Stack:
- Transformers
- PEFT (Parameter-Efficient Fine-Tuning)
- TensorBoard (for logging and metrics)
- Safetensors
- Language Model Task: Conversational and Text Generation
- Usage Type: Interactive dialogue and text generation applications
- Quantization: Model supports 4-bit quantization for efficient inference
-
- **Installation and Usage**
- To use this model in your code, follow the instructions below:
-
- python
- Copy code
  from transformers import AutoModelForCausalLM, AutoTokenizer
 
  model_path = "PATH_TO_THIS_REPO"
@@ -45,32 +56,40 @@ response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tok
 
  # Output
  print(response)
- Inference API
- This model is not yet deployed to the Hugging Face Inference API. However, you can deploy it to Inference Endpoints for dedicated serverless inference.
-
- Training Process
- The QuantumAI model was trained using AutoTrain with the following configuration:
-
- Hardware: CUDA 12.1
- Training Precision: Mixed FP16
- Batch Size: 2
- Learning Rate: 3e-05
- Epochs: 5
- Optimizer: AdamW
- PEFT: Enabled (LoRA with lora_r=16, lora_alpha=32)
- Quantization: Int4 for efficient deployment
- Scheduler: Linear with warmup
- Gradient Accumulation: 4 steps
- Max Sequence Length: 2048 tokens
- Training Metrics
- The model was monitored using TensorBoard during training. Key training metrics included:
-
- Training Loss: 1.74
- Learning Rate: Adjusted per epoch, starting at 3e-05.
- Model Features
- Text Generation: Handles various types of user queries and provides coherent responses.
- Conversational AI: Optimized for dialogue generation.
- Efficient Inference: Supports Int4 quantization for faster inference on limited hardware.
- License
- This model is governed under a custom license. Please refer to QuantumAI License. (llama 3.1 license)
+ # **QuantumAI: Zero LLM Quantum AI Model**
+
+ **Zero Quantum AI** is an advanced text-generation model built using interdimensional mathematics, quantum math, and the **Mathematical Probability of Goodness**. Developed by **TalkToAi.org** and **ResearchForum.Online**, this model leverages cutting-edge AI frameworks to redefine conversational AI, ensuring deep, ethical decision-making capabilities. The model is fine-tuned on **Meta-Llama-3.1-8B-Instruct** and trained via **AutoTrain** to optimize conversational tasks, dialogue generation, and inference.
 
  ![Zero LLM Quantum AI](https://huggingface.co/shafire/QuantumAI/resolve/main/ZeroQuantumAI.png)
 
+ ## **Model Information**
+
+ - **Base Model**: `meta-llama/Meta-Llama-3.1-8B`
+ - **Fine-tuned Model**: `meta-llama/Meta-Llama-3.1-8B-Instruct`
+ - **Training Framework**: `AutoTrain`
+ - **Training Data**: Conversational and text-generation focused dataset
+
+ ### **Tech Stack**
+
+ - Transformers
+ - PEFT (Parameter-Efficient Fine-Tuning)
+ - TensorBoard (for logging and metrics)
+ - Safetensors
+
+ ### **Usage Types**
+
+ - Interactive dialogue
+ - Text generation
 
+ ### **Key Features**
+
+ - **Quantum Mathematics & Interdimensional Calculations**: Utilizes quantum principles to predict user intent and generate insightful responses.
+ - **Mathematical Probability of Goodness**: All responses are ethically aligned using a mathematical framework, ensuring positive interactions.
+ - **Efficient Inference**: Supports **4-bit quantization** for faster and resource-efficient deployment.
+
+ ## **Installation and Usage**
+
+ To use the model in your Python code:
+
+ ```python
  from transformers import AutoModelForCausalLM, AutoTokenizer
 
  model_path = "PATH_TO_THIS_REPO"
 
  # Output
  print(response)
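The decode step in the snippet slices `output_ids` from `input_ids.shape[1]` onward, so only the newly generated tokens are decoded, not the echoed prompt. A pure-Python stand-in for that indexing (the token IDs here are made up for illustration):

```python
# Stand-ins for the real tensors: a generated sequence begins with an
# exact copy of the prompt tokens, followed by the model's continuation.
input_ids = [[101, 2054, 2003, 102]]                     # 4 prompt tokens
output_ids = [[101, 2054, 2003, 102, 7592, 2088, 1012]]  # prompt + 3 new tokens

prompt_len = len(input_ids[0])             # plays the role of input_ids.shape[1]
response_ids = output_ids[0][prompt_len:]  # keep only the continuation
print(response_ids)  # → [7592, 2088, 1012]
```

Decoding `response_ids` with `skip_special_tokens=True` then yields just the model's reply text.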
 
+ ## **Inference API**
+
+ This model is not yet deployed to the Hugging Face Inference API. However, you can deploy it to **Inference Endpoints** for dedicated inference.
+
+ ## **Training Process**
+
+ The **Zero Quantum AI** model was trained using **AutoTrain** with the following configuration:
+
+ - **Hardware**: CUDA 12.1
+ - **Training Precision**: Mixed FP16
+ - **Batch Size**: 2
+ - **Learning Rate**: 3e-05
+ - **Epochs**: 5
+ - **Optimizer**: AdamW
+ - **PEFT**: Enabled (LoRA with lora_r=16, lora_alpha=32)
+ - **Quantization**: Int4 for efficient deployment
+ - **Scheduler**: Linear with warmup
+ - **Gradient Accumulation**: 4 steps
+ - **Max Sequence Length**: 2048 tokens
+
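The linear-with-warmup schedule in this configuration can be illustrated with a small stand-alone function (a sketch: the warmup and total step counts below are made-up values, not taken from the actual run; note also that the effective batch size is 2 × 4 accumulation steps = 8):

```python
def linear_warmup_lr(step, total_steps, base_lr=3e-5, warmup_steps=100):
    """Linear ramp up to base_lr, then linear decay to 0 (sketch)."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    # Decay linearly over the remaining steps.
    remaining = total_steps - warmup_steps
    return base_lr * max(0.0, (total_steps - step) / remaining)

print(linear_warmup_lr(50, 1000))    # halfway through warmup → 1.5e-05
print(linear_warmup_lr(100, 1000))   # warmup complete → 3e-05
print(linear_warmup_lr(1000, 1000))  # end of training → 0.0
```

This mirrors the shape of the scheduler named in the list above; the exact warmup length used in training is not stated in this card.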
+ ## **Training Metrics**
+
+ Monitored using **TensorBoard**, with key training metrics:
+
+ - **Training Loss**: 1.74
+ - **Learning Rate**: Adjusted per epoch, starting at 3e-05.
+
+ ## **Model Features**
+
+ - **Text Generation**: Handles various types of user queries and provides coherent, contextually aware responses.
+ - **Conversational AI**: Optimized specifically for generating interactive dialogues.
+ - **Efficient Inference**: Supports Int4 quantization for faster, resource-friendly deployment.
+
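The practical benefit of Int4 is easy to quantify with a back-of-the-envelope weight-memory estimate (a sketch: ~8.0B parameters approximates the Llama-3.1-8B size, and the figures cover weights only, excluding activations and the KV cache):

```python
params = 8.0e9  # approx. parameter count of an 8B-class model
bytes_per_param = {"fp16": 2.0, "int8": 1.0, "int4": 0.5}

for dtype, nbytes in bytes_per_param.items():
    gib = params * nbytes / 2**30  # bytes → GiB
    print(f"{dtype}: ~{gib:.1f} GiB of weights")
# fp16: ~14.9 GiB, int8: ~7.5 GiB, int4: ~3.7 GiB
```

Roughly a 4× reduction versus fp16, which is what makes Int4 inference feasible on consumer GPUs.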
+ ## **License**
+
+ This model is governed by a custom license. Please refer to the [QuantumAI License](https://huggingface.co/shafire/QuantumAI) for details; the model is also subject to the **Meta Llama 3.1 License**.