akhooli committed (verified)
Commit 0094d5f · 1 Parent(s): 2faea16

Update README.md

Files changed (1): README.md (+7, -6)
README.md CHANGED
@@ -1,7 +1,7 @@
  ---
  base_model: akhooli/llama31pretrained2
  language:
- - en
+ - ar
  license: apache-2.0
  tags:
  - text-generation-inference
@@ -11,16 +11,17 @@ tags:
  - trl
  ---

- # This Model
- This is a partially fine tuned Llama 3.1 8B LLM for poetry generation. It is based on a 10% of 1 epoch continued pretraining of the
- [Llama 3.1 8B LLM](akhooli/llama31pretrained2). Training was done on [200k articles from Arabic Wikipedia 2023](akhooli/arwiki_128).
+ # This Model (toy Arabic classical poetry LLM)
+ This is a Llama 3.1 8B LLM partially fine-tuned (one epoch on a subset of an Arabic classical poetry dataset) for poetry generation. It is based on continued pretraining (10% of one epoch) of the
+ [Llama 3.1 8B LLM](akhooli/llama31pretrained2). Training was done on [200k articles from Arabic Wikipedia 2023](akhooli/arwiki_128)
+ with article lengths in the range of 128 to 8192 words (not tokens).
  This is just a proof-of-concept demo and should never be used in production. It is also not aligned and is likely to produce strange and unacceptable content.
  Only the adapter is available (along with other config files). To use it, you can either install Unsloth or use the Hugging Face PEFT API.
  See the installation instructions at the Unsloth link below (one GPU only).
  See the [LinkedIn post](https://www.linkedin.com/posts/akhooli_a-toy-arabic-poetry-llm-finally-i-am-sharing-activity-7242053356062466048-xRUq)
  and the [X tweet](https://x.com/akhooli/status/1836307030488895886).

- Here's a simple usage example (raw output) - and remember, it is a primitive toy model using freely available compute.
+ Here's a simple usage example (raw output) - and remember, it is a __primitive toy model__ built with freely available compute.

  ```python
  max_seq_length = 256
@@ -72,4 +73,4 @@ pprint(r)

  This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

- [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
+ [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
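
Since the diff elides most of the README's usage example, here is a minimal sketch of the Hugging Face PEFT route the README mentions. The adapter repo id below is a placeholder (use this repo's actual id), and the prompt and generation settings are illustrative assumptions, not the author's example.

```python
# Sketch of the PEFT route mentioned in the README.
# Assumptions (not from the source): the adapter repo id is a placeholder,
# and the prompt/generation settings are illustrative.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "akhooli/llama31pretrained2"     # base model from the YAML header
adapter_id = "akhooli/toy-poetry-adapter"  # placeholder: this repo's actual id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA adapter

prompt = "اكتب بيت شعر عن الصباح"  # "write a line of poetry about the morning"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

`PeftModel.from_pretrained` loads only the adapter weights on top of the base model, which is why publishing just the adapter (plus config files) is enough.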
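A corresponding sketch of the Unsloth route, assuming the elided example follows Unsloth's usual `FastLanguageModel` loading pattern; the `max_seq_length = 256` visible in the diff is consistent with this. The adapter repo id is again a placeholder.

```python
# Sketch of the Unsloth route, under the assumptions stated above.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="akhooli/toy-poetry-adapter",  # placeholder: this repo's actual id
    max_seq_length=256,   # matches the value visible in the README's example
    load_in_4bit=True,    # fits on a single free-tier GPU
)
FastLanguageModel.for_inference(model)  # switch to inference-optimized mode

inputs = tokenizer("اكتب بيتاً عن الليل", return_tensors="pt").to("cuda")  # "write a verse about the night"
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```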