Update README.md
README.md (changed):
@@ -14,65 +14,49 @@ tags:
 - transformers.js
 widget:
 - text: >-
-    Teapot is an open-source small language model (~800 million parameters)
-
-
-
-    can
-
-    TeapotLLM is a fine tune of flan-t5-large that was trained on synthetic data
-    generated by Deepseek v3 TeapotLLM can be hosted on low-power devices with
-    as little as 2GB of CPU RAM such as a Raspberry Pi. Teapot is a model built
-    by and for the community.
+    Teapot is an open-source small language model (~800 million parameters) fine-tuned on synthetic data and optimized to run locally on resource-constrained devices such as smartphones and CPUs.
+    Teapot is trained to only answer using context from documents, reducing hallucinations.
+    Teapot can perform a variety of tasks, including hallucination-resistant Question Answering (QnA), Retrieval-Augmented Generation (RAG), and JSON extraction.
+    TeapotLLM is a fine-tune of flan-t5-large that was trained on synthetic data generated by Deepseek v3.
+    TeapotLLM can be hosted on low-power devices with as little as 2GB of CPU RAM such as a Raspberry Pi.
+    Teapot is a model built by and for the community.


     What devices can teapot run on?
   example_title: Question Answering
 - text: >-
-    Teapot is an open-source small language model (~800 million parameters)
-
-
-
-    can
-
-    TeapotLLM is a fine tune of flan-t5-large that was trained on synthetic data
-    generated by Deepseek v3 TeapotLLM can be hosted on low-power devices with
-    as little as 2GB of CPU RAM such as a Raspberry Pi. Teapot is a model built
-    by and for the community.
+    Teapot is an open-source small language model (~800 million parameters) fine-tuned on synthetic data and optimized to run locally on resource-constrained devices such as smartphones and CPUs.
+    Teapot is trained to only answer using context from documents, reducing hallucinations.
+    Teapot can perform a variety of tasks, including hallucination-resistant Question Answering (QnA), Retrieval-Augmented Generation (RAG), and JSON extraction.
+    TeapotLLM is a fine-tune of flan-t5-large that was trained on synthetic data generated by Deepseek v3.
+    TeapotLLM can be hosted on low-power devices with as little as 2GB of CPU RAM such as a Raspberry Pi.
+    Teapot is a model built by and for the community.


     Tell me about teapotllm
   example_title: Summarization Answering
 - text: >-
-    Teapot is an open-source small language model (~800 million parameters)
-
-
-
-    can
-
-    TeapotLLM is a fine tune of flan-t5-large that was trained on synthetic data
-    generated by Deepseek v3 TeapotLLM can be hosted on low-power devices with
-    as little as 2GB of CPU RAM such as a Raspberry Pi. Teapot is a model built
-    by and for the community.
+    Teapot is an open-source small language model (~800 million parameters) fine-tuned on synthetic data and optimized to run locally on resource-constrained devices such as smartphones and CPUs.
+    Teapot is trained to only answer using context from documents, reducing hallucinations.
+    Teapot can perform a variety of tasks, including hallucination-resistant Question Answering (QnA), Retrieval-Augmented Generation (RAG), and JSON extraction.
+    TeapotLLM is a fine-tune of flan-t5-large that was trained on synthetic data generated by Deepseek v3.
+    TeapotLLM can be hosted on low-power devices with as little as 2GB of CPU RAM such as a Raspberry Pi.
+    Teapot is a model built by and for the community.


     Extract the number of parameters
   example_title: Information Extraction
 - text: >-
-    Teapot is an open-source small language model (~800 million parameters)
-
-
-
-    can
-
-    TeapotLLM is a fine tune of flan-t5-large that was trained on synthetic data
-    generated by Deepseek v3 TeapotLLM can be hosted on low-power devices with
-    as little as 2GB of CPU RAM such as a Raspberry Pi. Teapot is a model built
-    by and for the community.
+    Teapot is an open-source small language model (~800 million parameters) fine-tuned on synthetic data and optimized to run locally on resource-constrained devices such as smartphones and CPUs.
+    Teapot is trained to only answer using context from documents, reducing hallucinations.
+    Teapot can perform a variety of tasks, including hallucination-resistant Question Answering (QnA), Retrieval-Augmented Generation (RAG), and JSON extraction.
+    TeapotLLM is a fine-tune of flan-t5-large that was trained on synthetic data generated by Deepseek v3.
+    TeapotLLM can be hosted on low-power devices with as little as 2GB of CPU RAM such as a Raspberry Pi.
+    Teapot is a model built by and for the community.


     How many parameters is Deepseek?
-  example_title: Hallucination Resistance
+  example_title: Hallucination Resistance
 base_model:
 - google/flan-t5-large
 pipeline_tag: text2text-generation
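For reference, a minimal sketch of how the updated widget examples map onto the declared text2text-generation pipeline, using the Hugging Face transformers library. The repository id "teapotai/teapotllm" is an assumption based on the model name and does not appear in this diff; the prompt format (document context, a blank line, then the question) is inferred from the widget examples above.

# Minimal sketch (assumed usage, not part of this commit): run one of the
# widget examples locally with the text2text-generation pipeline declared
# in the model card. The model id "teapotai/teapotllm" is an assumption.
from transformers import pipeline

generator = pipeline("text2text-generation", model="teapotai/teapotllm")

# Context mirrors the new widget description; the question follows after a
# blank line, matching the "Question Answering" widget example.
context = (
    "Teapot is an open-source small language model (~800 million parameters) "
    "fine-tuned on synthetic data and optimized to run locally on "
    "resource-constrained devices such as smartphones and CPUs. "
    "TeapotLLM can be hosted on low-power devices with as little as 2GB of "
    "CPU RAM such as a Raspberry Pi."
)
question = "What devices can teapot run on?"

result = generator(f"{context}\n\n{question}")
print(result[0]["generated_text"])  # answer drawn from the supplied context

The same pattern covers the other widget examples: swap in "Tell me about teapotllm", "Extract the number of parameters", or "How many parameters is Deepseek?" as the question while keeping the same context block.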