This AQ-model is useful in conversations with another QA-LLM chatbot, so that the conversation does not get stuck but continuously moves on to new topics.

If you set up an automatic conversation between two LLMs, one QA-LLM and one AQ-LLM, the conversation will not get stuck or become repetitive but will continue forever :-)

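The alternating loop described above can be sketched as follows. Note that `qa_generate` and `aq_generate` are hypothetical stand-ins for calls to the two real models, not part of this repository:

```python
def qa_generate(question: str) -> str:
    # Placeholder for the QA-LLM: maps a question to an answer.
    return f"An answer to: {question}"

def aq_generate(answer: str) -> str:
    # Placeholder for the AQ-LLM: maps an answer to a fresh question.
    return f"A new question about: {answer}"

def converse(seed_question: str, turns: int) -> list[str]:
    """Alternate the QA and AQ models so the dialogue keeps moving."""
    transcript = [seed_question]
    question = seed_question
    for _ in range(turns):
        answer = qa_generate(question)    # QA-LLM answers the question
        question = aq_generate(answer)    # AQ-LLM turns the answer into a new question
        transcript += [answer, question]
    return transcript

print(len(converse("What is a transformer?", 3)))  # 7 utterances after 3 turns
```

Because the AQ model always produces a new question, the loop never terminates on its own; `turns` bounds it here.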
The model was finetuned starting from t5-small on an NVIDIA RTX 3090 in about 1.5 hours with a batch size of 8, using 4 GB of GPU RAM.

As the GPU was running at 320 W, the energy used to train this model was about 480 Wh.

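As a quick check of that figure, using the power draw and training time stated above:

```python
power_w = 320     # GPU power draw during training (watts)
hours = 1.5       # training duration (hours)
energy_wh = power_w * hours
print(energy_wh)  # 480.0
```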
The same model trained with a batch size of 32 gave slightly worse results (14.3 GB of GPU RAM, 1 hour of training).

Test with