This model was converted to GGUF format from [`Spestly/AwA-1.5B`](https://huggingface.co/Spestly/AwA-1.5B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.

Refer to the [original model card](https://huggingface.co/Spestly/AwA-1.5B) for more details on the model.

---

## Model details

AwA (Answers with Athena) is my portfolio project, showcasing a Chain-of-Thought (CoT) reasoning model. I created AwA to excel at providing detailed, step-by-step answers to complex questions across diverse domains. This model represents my dedication to advancing AI's capability for enhanced comprehension, problem-solving, and knowledge synthesis.

### Key Features

- **Chain-of-Thought Reasoning:** Delivers step-by-step breakdowns of solutions, mimicking logical human thought processes.
- **Domain Versatility:** Performs well across a wide range of domains, including mathematics, science, literature, and more.
- **Adaptive Responses:** Adjusts answer depth and complexity based on the input query, catering to both novices and experts.
- **Interactive Design:** Designed for educational tools, research assistants, and decision-making systems.

### Intended Use Cases

- **Educational Applications:** Supports learning by breaking down complex problems into manageable steps.
- **Research Assistance:** Generates structured insights and explanations in academic or professional research.
- **Decision Support:** Enhances understanding in business, engineering, and scientific contexts.
- **General Inquiry:** Provides coherent, in-depth answers to everyday questions.

### Specifications

- **Type:** Chain-of-Thought (CoT) reasoning model
- **Base Architecture:** Adapted from Qwen2
- **Parameters:** 1.54B
- **Fine-tuning:** Specialized fine-tuning on Chain-of-Thought reasoning datasets to enhance step-by-step explanatory capabilities.

### Ethical Considerations

- **Bias Mitigation:** I have taken steps to minimise biases in the training data. However, users are encouraged to cross-verify outputs in sensitive contexts.
- **Limitations:** May not provide exhaustive answers for niche topics or domains outside its training scope.
- **User Responsibility:** Designed as an assistive tool, not a replacement for expert human judgment.

### Usage

#### Option A: Local

Use the model locally with the Transformers library:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe = pipeline("text-generation", model="Spestly/AwA-1.5B")
pipe(messages)
```
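Under the hood, the pipeline renders `messages` with the model's chat template before generation. Since AwA is adapted from Qwen2, that template follows the ChatML layout; the helper below is an illustrative sketch of that rendering (assuming the standard Qwen2-style template — in practice `tokenizer.apply_chat_template()` does this for you):

```python
def to_chatml(messages, add_generation_prompt=True):
    """Render chat messages in the ChatML layout used by Qwen2-family models.

    Illustrative only -- tokenizer.apply_chat_template() is the real mechanism.
    """
    parts = [
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n" for m in messages
    ]
    if add_generation_prompt:
        # Open an assistant turn so the model knows to answer next.
        parts.append("<|im_start|>assistant\n")
    return "".join(parts)

prompt = to_chatml([{"role": "user", "content": "Who are you?"}])
print(prompt)
```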

#### Option B: API & Space

You can use the AwA Hugging Face Space or the AwA API (coming soon!).

### Roadmap

- More AwA model sizes, e.g. 7B and 14B
- Create the AwA API via the `spestly` package

---

## Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux):
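A sketch of the typical GGUF-my-repo workflow; the repo id and quant filename below are placeholders, not this conversion's actual values — substitute the id of this GGUF repo and the quant file you want:

```shell
# Install llama.cpp (Homebrew, macOS and Linux)
brew install llama.cpp

# Run the converted model with llama-cli.
# <user>/AwA-1.5B-GGUF and the .gguf filename are placeholders --
# replace them with this repo's id and the downloaded quant file.
llama-cli --hf-repo <user>/AwA-1.5B-GGUF \
  --hf-file awa-1.5b-q4_k_m.gguf \
  -p "Who are you?"
```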